[Binary artifact — not renderable as text.]

This file is a POSIX tar (ustar) archive containing Zuul CI job output. Archive members:

    var/home/core/zuul-output/
    var/home/core/zuul-output/logs/
    var/home/core/zuul-output/logs/kubelet.log.gz

The payload `kubelet.log.gz` is a gzip-compressed kubelet log (internal name: `kubelet.log`). The compressed byte stream cannot be recovered as readable text from this extraction; to inspect the log, extract the original archive and decompress it, e.g. `tar -xf <archive> && gunzip var/home/core/zuul-output/logs/kubelet.log.gz`.
loOmtEʧ s*{*)gaȎ:6 ٜc!PW~3%#SJAiz UiQr9@\ &T~&:xj`63Vy#A+DO,ȓs2.86J^RjQy: ^+\F*"O'esCB!$1\$]Fj Zu 7"dL` 8CpͥeVJ@0sTR/^ⱸ'e*{527ǿ~޿5>3 OqO?FJ._e6Y4sK; $bgVegJhHcoŐ̾2jȎ)PN 9Ps-xw'Il;?1 HUG'$:N(/>>] wo}^8X:ɴFUo*nl%Dc~r^ͽT}R.)9 U+r 8(K?oQO\`4ػuo<U'Tp;x NF14`瓹_4_\N.9zIDo8iƑ@<>u0yhfX> ӆ8U)YF1ɺ h4jZ |̇q#y`o^FSxةk:͵aoY87Hߡ8:~?}8L~5wVIԳL{ ߞ0jkho64װKU͸}>B/[}?Wq~9M~t}#|КJ"1NWpBɾVg.h_Jh1Ĕs9ҌTOwԽ}>zN2zHBM奢mSzcM@xkK&Erƒfm 댤6d^QC8@\ZnZG,>Kh{hv3%2NAt&O9pCcyItwʡN'7$?Ou4F dk7_r3<8ƒ$$s:RV/=T}WuK_BbR0"]]I%J0D ι+Pؐ\?&v'x2vȷD@v]ƕPw`0>$ʶCZ{BZ9{([|ӵr]ErDU4"t`J hX-s1dV dXH=!$dArN:ĥTL$А PL(IaMb% zֲClvHL+o1j|7T5QnyniX!p^x"/}ïdW_E[g N.4`#%(,hP8“X.] hTǼ0o^Iz5QddIb' ^* ={ϼyv͞MT;ReXA*a,ҕ ]pb-,Qhe -v͎WI^ɵ΢,p7Fg:#(V_ҍ8o^!'_,׷]]+Qv+۲v5=:9'7*}*'qs$ZFJhI4ɔ)c$B Z@%ml8J6j ʴ@a(Z*Dok 'RD+Ӷ5,/U%rǫI!h5:Ơe F l/gӳUVfgF9(!Μ# K"ҶD g%0<1A;y5|p:$NBll :% gH% Tےl-V=~;ҟNnjN?2ccyW?mP`y/'@ʵRcM8lwqKpb&7(Mb`"|L+݆8j GaIHeӂBgT<*d,2%j֘zn68˯㱎^M F$AK\p^CQRţ+ŭs6xqZiӎv-֕8 eP_UЅ'M^>Mej8pLҜIHhWzO`tqb[cg}q3}KPg< !^t1 xu %p) bTmBl80(3Y`jDzMĴ#0gw#K?Lf'iD53iWѦb\13.adY7$&+AVs\B/?O%0]쮡{f-lerzKe;|3;sbYwW{f)^)zZy$i h'D2yVN|`MG%5?i *9".-A90<6;?IOLY.=58|.7 By B[,aLQY=&R6 ?au<8뙀|! bdF\o9[;zaytmo1( \*5K)Fo <= l$Y5=#BsW` xCn6<ڝ۝lzaoR 4ism g!s.%iix-Nvββ,{=ILֳ@d< Ϭt.ypEbRN$$Ĺfp20 =Vп 0e7SQk,tXs)h:Jkj8{u}cy=ֹNOlvEi]|~4kzOYW{@J%Asg)JcܹdeјU)EV |L,li㹳jǘ2D (eH 046-}CSsJTz<1I3F$T3p`4p@&rhz!ԩbGl R͑+9NvؗjM`DYgF:QW%aמo)VYHTvʢH5zd6EX˨ 4 > X׾ZWܚˠk8^s_N7e`$>'Hq"rZ#α@.jk6c}.b;]\>]3;@ AlH8.d&5eh0$PB8kքx]% -);c:OZ#Q[]÷ g附߭}[S97W/rCi|Ζv:>[hs~!&5"Q ]?I8HcLQDBr9n %u(Hg8 G^k PnE IP1h) K4dsKDBXX.+z*ajUՙ {y7\bRn(tn?|{3!EXϊas| R޼}?jwY.=,Zlk iDjjڨ XjAށSkEi  WaVl +|H Iʖdd,(oq)7:%| e VSgݾiNRjg#:x[ dS^Z˶{bXzب:jvjX5[{V|+x[mNCx 7I3mh LJNh f;ٛh?X噼2l[nrŧ,̋yV 4J/Hw; q.0p0,2(6M[ ZEB~Zp.K ohl('ahl-eN152F -?`a&i!9-V *ϕ6xn]nO\7p g7,a(;q"gE8Nzi^6e7m7f%=X9ꝏB)jRVhDJPNNРFn6hkτ 뙱1$˶?8'8&G@>QP%f6lol8m˧)n6)8e6&1`٢߀KHfB>0]yGC[ekг057:`Qr% PRj, P̙CNX`"3F+Bk_Kq_!Ca1]-TYNWs\&˶h*g !X7_ ϫзסK2am@A+d-cAf-9f9ђ&$=Qrd l$LZ/5ɥ'Y"\XYbDb8jOhc"Q' Ff3Hh! 
Jb$Tʆ/ΖbmzK&dAP}=fatVz 0G]-8ޠ<؍eo"f74=76_^ɳvh\Fs ?>eW o#x~Uoo7ݎ[e&7фZovky)-/鷾 |xy;|pFQP`g_:\]Wk|m.^t-n7~DMmO\馮Z fX^ӆS x:]ѼMgnrJ^f&ŬQ6b)ϟClsrF~eO_ʺ4*&Vuzw@?qϿ~_W__^V`\lA=3_.{@תiL-^8 d79~ﳏlDn@+~~DA.-?4 ,@&a6M~D9?/Jq2=S5]6ČfH=>g(ۯ.k/G$_#O #V@RyH)hf&< pkK$Ep%*#*I}0=P>N"@s_$nv/#'aTF j+S<:m'3yukiĨ3[ЋhE-\3ܳV Bv8F_839 H1sG"cCB?{Ƒ H~ 09;H:F?eB4)s(U(HQM'M(NsU_UWWcb ];zQWoWxyuYܻK^"o!.J<YRee!'BZ@^*\2B_IhbF=Yjݼ5Bp2L-4'⬦Z]^}??ǿfVlhsjɒKr s c-5mUcn}'d a}ӽt7\ b<;YAy]ju^O((dCm#8ޙ9UV.m$;vV~w|*ѨGsO7;hHMҙO|+*1ZYl%2At_k:.poCA>GEz- \~&-ɝYuykt1{K[BӖտ_)jrɇhp.S!Y-pQŠMInc'RԢ]d&FAQw wgDHG4ςO!Z橏ok RD^b̢t*wɋɎ@Dw; sVT2J.Kj EOkBB6 6X#[J؄Bd,6͒ J9R~ ?r:}e2Aݾ F_z\گ'fsFs,T^3Ahc"^v̊Y-xI8%!N QuR@3󨐙B$^ȴYc΂C G#*kKp-_c 3" 85I*ʃ2#༆(G=W[EAK3u'^HNj[䧋aR &N &̆2#@'DLzրyN>RSɼ+9hw־}6}.e88<fIu|An߃odA7Յ._ֈ ֿa-lwrYsWw3;sŲ8:xd&7 -TL-$*v*4{Ұ6 i2.W8!Ix',Oxj~IY$M&wVW~8;F*!#֛ߎ͆wl |ܿz9k!B1<- 4k7 -^mv;{cJdY=PiIYA@'}-':9[\'A(/E݉㤀oN%!yL@RRI3AAtTEXĄ>0Hg?^L'|0v5N$??Nj_ǭ.z_eI.y9˻G; g7sV G@h]aV ȯsP8 e ;:'GJrN?u?Q=A(L5{6o1 HUɛEHt6JByD|qp9[ 7\@kx!SP6@J2Bj RY~ū޹mHӢٶfkW}R.)jVoMMFq8~@=1'pQS/z߻1Y޴y>/?WݿO~j|1820=Ѐ] Atm^%'Qo8z bqqi뉙֞@S[7hfX!ӆ85 W0bjuO]+#w:d[-s5Tp)T6y$w,|17/^Fc?RCxDcpoC?>P?7=j LGp:!!0[]{tJuͻzIlJN#|g' 0{^|~k \ͣc|z~5'K1kNOp柦uo^u?VG^Ӟ|S/H;>hNм'2OMN&Uz:/HBM奢mSzcM@xkK&E XR٬B0ƘH*m˼apG,^F=OZ;ÙJW|MrǴPȡA?*F;?v`N[C0W&1.sm0,hjS<~i)oO}_ϳM̀+iT_aE $Rq]rGVƆT 6!P55Z[C86t.9CvYs(C5 $]TwD d0Q@: u(crKQ~t1w(ޗq؛` F G_"7R۶~7 φ/DǾίp7ϯ˥X >ģy0tZE;Ҫl;DHKi7ucS~[]\uL8J+W3[IJOшЁzCAmf/I`O(' Y1)ߡA`!1F9wq$A8s]ͯ).uLc׍?=嚝U8]̫ *:$؈4x " ? T9irQaz{!fiZrA-7ywߴ0xѸbo7kmg<}C aZou/6w;æK&ɵΣnh{cq' DQz܏ίVB(tnF) O&Owxb1J&ijb (w[ !FF QYíN*dwNWo!9dV(\T 5f6%qDH!R hѹ- 'SFM85c#eX3d|Is,hEGp7q N)}a3p6h<gkT8U;6աJ]5I*feMn^12yQѥ11'0G@9F%i!1Sg<νOurQJ-+g->r9(upxy#1!4wZD(SW=Oq]|૵ 63zQw"q7ӳ6:nj_/TQRL:!ҤS2e$Kr0*5%5vb+d;eLYKIVg>119952(ᐩrgm:9Qr;$Ȼgo.=2,_P\Q%H3*7^4G1vf|Ui)<[@bX4欁2;2ZE5eHsp2yAwfޞ_rP~fT@SF& & Q@TB"Jl|:^\AZV=+c-;:׆: AؽT“;kT4'4*9 NJ 1FFhtHEde„ uN@@`[|yaX.TteNyBBsp!S(u|bje). 
0Uklqq|uQ9m, S2Jp;=5$Sof_٢fR ^]v}LT&2aS:.gxs!LڑO=v "J Rf0T{yNEEO"ӰHӒ+P僐K8Q.VF AITL4Be9Zg^m:[BCsJr\A$C z%ipjtPX@£G7lKڅPj9IUHIm|`oAtK ٗ˖貌L;oF:Y> q!uNHD" x99ky0Z)r!1WS@CQ!cc)(֊k?qB;~]h((:p?@Jֻ:pR? x%j$)O}x= d:c'W W#Ş|u]WȲ@QW#lzG9Gܦ"cѲcXB:~# c |4Dc- QQE QJ4߳Vlaw dWJjK)4&RüFmE ]ct9Vt'tT(3tt[$HxosҊ3R*PLF}$Sԓ' h$VK rp$+'wh)cSy7|Z y j;Cc]+{Jb%&c&o4Xfݧq,SlWǯxd]: VܜsZW?{7Ǘ_iYuΞ'i4hWI Xj]Sk V^4b7}IFcyGd-7Fb$yNh ԃ%ʾɾɾ g3T{|HRknsNeΖq`/yԞF8Ar)D_K!iLKZ6'؁b)|~O/ }}nΠ^QkM#r59 su9 /%KpS4NIvK.-\R5)E*Xr $tLBрLIa"Ǫbzށ۰}'BIbXf,CQr۶↯PoOPOQjm$oFx|z=`2"*$ps"fU;b-ƉhHbVĢw @PKL(a%k Idl2:3-u>KI 50Ryo@b< %s;S^iUʾަl~dmcpUtMfެd:<ߢ 5(gu5c>f7tw}y5mqnyok6\fG\B˷N\02ꜥ56vB;wAUЍ{}]>{^jJO7rm~ G iXoWx!vbҪ,Hsym㌺+|m[:o)9;K[#K5䩚_jS y!O5䩆<Րf(U>ە|B>42@K(}Be˃RBjgR]Tqzq$5uB2= /ϿLN+|xd= =b bO9~D k)G cY`U$ '-c.8pS88(֖'~k}z7v)!= %;a<(X'ԑ P\K]@.zM 8%Q!r(*Xp5s)1^ƍ q #pM*&.ƫGmYF%6-X :? ,tiT:vLJXU.6wݘ .O;^J/EN\p38*(`Œ,GBde1ol5Ȏy7 $h? at3)Թm*Z@l, h4rDr %f9 &T~l 4ˌ3cr4̺¬ې?:GU9P)MuYޫD(Ř$ó+'Pm)rJȍ1ND"#.A0-<- 9(bYH2RZmRtifk# 8k/Vҩht[DAA+\\QTTޅ(:r!ׇ_tٟǸ_[N |2̇oupyj*%9` pgZ A Yt_p E1Q&(?GZ D8rIۣ4}6Yޘg.PK*/u@ė[ܷa&T((e%RK8ܡ<\s4-o>|8^\#(qƄbx:Z,M‘QV Xu|^*k?VԶ̚#ڨ1{K৛Y20{8Q]L}hxz6[avII4_~3m4N;J?]#q9Gw,˯ȴ1/ wb/ǣFO^ 'sTN׏:_dרkL %N!#y ,|ryJ74̃rl#6Y:5N-[өB}p<{<ë3$?q?sן>O?qN>Oxw`B ꜂~ /z' Ǜ- l1Yê f\s>diðr7~?'?=*A|nMpzq6E#)kOhhfh/\1c1LH7>h PAxl \,qq[k壗dbo  J69u6"<ڂs22' e(,"V!1vHz6<ҰC8oJnӓ!rSoı&dc\%bJ1hאq90RAQS:u&izsB 07XQ4Э bIZ;I|{-N{.L꿴'%C5[/-MzىIHW=ې%Ҋ'Kmֺ@pPpgo'\_NgM#VSI#\ 8(iє/}֖r|Omߢ8qy Ȕ!tR·q񠟥I`0ɬQk0%mJ*p#>BТsI[$N0 q,jF ʰ|JE2>9~4 Ѣ #8čb\p Sh_،-6'ú0YV· Vl HJN r)YU+*7;0 1.pVXqmkNpV#Pdo3i (u]nR8e(z^ 0tJ`Jn)` Ғp6K,Ida,4ene^eRnN2zǽQOnl7/@bIya6l`;8K%8 A1I L#xKdKΖgmr%qKB* @@M&ϣB0Hdi/%|DXЂ --;c-^ƌHHNM򠀥 $eTG"@q(\6ZHkO=kK=gV\i]TD40x8િPAm8V#s6 FVJ;WOrZP mL[#;r"& ^'^dXck謯.5\?B@ >K4tHZ: 77O€9=0"ALjMș &eF2 ,Z#ՁHOb.KYuv/Ƴ'׺8E;(_DbĈ,kq q#OMV-@9ny?ݮ 6Zx#ħq2NJl`1$L߯7c^>5H]IQ6:%K! 
nC(ԞkiH̓֟2yA@= T%G\5HK+휭8ǎ0n PK\R{fIϹ̕uA(oQ9-eRt'TʢvoO<{ݾ 92vxaߍ798Po}% pn*C2)) LR@&dQrۄdB>cMRބdk-$`JxEKXf9]A&# "+^ +5$,9,8.P፧HXs` `A`,k0-6mιr(ٍr/fpyno}/~p 9ѺǦr{4V ̤""ijIJ0|ҡjq [c|V$XQY\A:'h'O'x}gNhL@evZ\q"Ä\vEjRN$ 4IԹO O76H WƁ>PBZA=N#`ya v|5z\bG:lb,t=>i0\}lNޜa6ÌuOfb%zT (Is g D[%EB\h&#r8ɝ6h,q.  MRl8[nݥ ʑ-e%N )4JD%eFPh(ZY`jvM|aj&B>[#BiF-HM}^ʕʼnfZ%ka1$U@5ʝqU/jt6:$kDƣHüVRr"HY65/}l1v@F!)~S#Ķp3us=%H>wDT]×Λ&:ǯ~ ta:+8>3xZcDJeM,( T u gSVĢWAjcDF:iB&ś;)#:=ܟq>#u0OB1(θE7_>y>v%G;N>8ǵl/ygQ#CEiPN+X)yu$^ɋn{'"_^]џUrDgǫp[ <˓P%/,L  9a|@P, y+S$,$C 6ox-Z| 6"MD 穀@jWh]v{at GZhd''G('(9ȎFFcGgw*yH'GWp5[c'_ )H2_ZB }=gsak!RQwM۫۷7AK}R.)xVwa<#-jh%\}FMK3MLnjo䛟/_O>,Q `n?'fnDeۋw@HSO4buOMݰnAbP'|S+e뇿}q30Qԃχ {tӇѵ*57kآYP੧MP ow6W58~Mg)]ļyy5'+bVD 絥wzT_zyEf_Fh.GA&𧼫{h틵m6q`Sy#IfNAs5YP.E)%*$#lLIŽ 3WqO^ 8bXeDC3(Ѭ1xe'sSv^n9ĎIK(tנl踛Um,ܓQB)4{)kO8]?W\W'q\} fr s*;26*g 9jxY{\H_1$iY:zX T-R AkJ=4H' @` uݻE1(WX]A \"y"YB2 !!s .d8^!zL+xfuR&xsy;܈(\oxurA~%(,hP8“q!Ղx텔 >MP@jǫ-{˼Yo[ME/7uNiְS)|Lmrvgz!DhV>.o]Zt>ZE<6 T6F)D;MM.x뜡2h'cD5ͳkeggl:*8KfE_~ٻ8$Wv23"/bfg15o ,ErIbnIaX$2#^DžkOhϏ? ߾بbuɅIIYITI쳊`Ы)"뢈mB $Pi詹oKC%av$Z[̹"[Yʳ`0x(XXXxiQ^B.,>-z͍:a&GG'~|4b?rUMbdt$x Rbf8VTa0b3g7brM\G{$j_i;u^r V3DHfLJUٙjBvt/$$=wQKRح.FBZ&gz rӞjɰGvҜM<"&^#miH{H{Aѯכs~XBԝ }/wX}rƬ+=`%cح`:X"#}? 
TbҌY .raHLޏQz1JQzy;{KfG'q[$yv:?2> 5?}g1HsW7}u| D9xk#R#qM茰P*4N_0_a>~燧gm_a_Fi/o*oav)$uHZ (ۆV9Ztf=liN_qohN8x7y'pumΙؤл1< >k{4ty1S+Q)ȞӖ] Jגu7b;ٳkdz=\/|5,jt;p26 ^ĤY[J&UDYFkʚJNF6uh6Z}qTowAͿC'pXEχ^?P{b4Çڗśe?.bg:8bw7S3 g̺̒w23B$}!˖?2ɚ*X yMUkdz ZUWMqc+Y LQjQ P7cc7l3N:r3)yoh:̜=_*]mQ/f O<{=ogl­WKN>(f0d/)?{cs:d3Qz`9I[oz܃`DٙwoJ?aS.3a5K7~mֵwOyCNcVF=?җl=ǙK_}񯏲֘Zc|vV?.n:.zݶh-h1 2[¿m|3ETQ2c'zRnTm^G.܋oLM>auJo%변=`"-$_Vڹf(>9>yq뀞$:߼NЍ}e&gֳK20a:V J]F.ZM\ *o1@E9jRH]ZEK%P$TujЪhّ ZV28;7 ''gmsu׾my͚bִWf2CӻD]^vy}t^?/>]jZ]0`vjDn (1UzA\k^K5[Z{WV.biSt©f%-9 s.?RG~vf~-9rm'|+RFL8 ֧ޔ箆 OFl/±Kz{u"z?>Z~߮\uq`$sv>?^[`tts/z07:7|:9_7^Gʚؿ}wzc߉6t$2.9v'ݥ7>½ @]Ɋڏu"H,9kNϯȒJ]M/ps9-@k(Bi-lTr6yD !t8},ݺTFHoِA s*g߻kiq 9pŋ4p tF s3{?Y&e:a^˪p϶ٌ?4_gљC﫲u앆/3Xph!KU(r*bu1Iّ&r:;\b ;E/3cصݲvN<~mRK\gT͕5a7{X2*%EYbmQ:stP&Ib*hB-HIQmKJBkܼ*NC}zBjo Fk`֎yh@1בɇ})_j\!UZ+ϥ׶alm,U#bEpwZCM("эf6i9bID9_|Ə0aGk83G6oUGáJTdz;m *bC&Sʈ)5tF!;L.+:QeIJklV~Bm=BP:bvب"??}=;=\o> >H&[“X/y$օ8fW ?}03{Ż GU^ؙ"YjjH%YtAoF1@E(0aW#16Y!-ю4VjH)F*ľއ Dd+qxUIcOi3LU3ڐjU% N7iKZ!rE|^Ic `hjA 1Ȏ .UC7_*KI!oG Lem9z Gh|4el")G>!( xu:p4(6Mhҫ(rtY\iAKW40w<@dC|Pc!9<% 4I"eLȼ&# f,TdA*X?Q<S#|qgUep 5Yrf貚N;>m ƻhp6[ό,dJ)$K\%VZiL=R{IN?&z1T@@"Q)|BC#LKlESH]icY&-h'ʢTn<L >^@Bٗ;e,w XXj59jp>%-4> : xdI.z63$@fVYBir9 (=D;䍃"bgM-By0tS AW ?KB [ʉ q`k>mOtbҡ-欝ӈPLF̲t⭛r)ܴފ" R3rT -,9E=`^H-ߟҗ$Q"I ljz1sކ`P")*5^KC&"!옔ؤ5)gE!$YoI vIyU@7Ѷ1@oFW!)_Y%YH+ XX;"2 0xӋT\ F 45:9/St]% 杈FiUF 0P@azrԫ{󳸾-s `WBXBE@<.2̏ο}}$+*j[T$0NUBneJ:$ULc@ QGCIB8xlH=Gl[$BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $z$ZUL UZq9=\$7Llz AWSӷOj44 ܗ=B,kDWR[DD8 xq! Pk|A*hYNSoJIMuZH [& /܄ϫ9d^>v£4By x}X e]cxN} j)n"HYxYc/)1亶R1$ב\Gru$ב\Gru$ב\Gru$ב\Gru$ב\Gru$ב\Gru$ב\Gru$ב\Gru$ב\Gru$ב\Gru$ב\Gru$ב\:ò!A\Î\i-RV]wGC 7a\*r4RגcnCcVKEJҳn6=\y׍~-=,4[N&d,}ߋ/ʏ/z *_Oot8[30F r櫬,4,Mxj%Ǡ ^w:Aԙg\0*4o M'v. 
HRcsze7ISAz.RLK:%blQTE' ) EPS/sT1)HLȚ%t0 }bqJ(2a$dzW11ي:ea]u}l}cp;Ab[44$>4:V8{XiLce rH#E>SA-hD8$ESVˤp0hH0ȴ0R4tXRw{wJi-]L[|wqKvWAcq8!{]ns3hQXp 1,#$k+)נW1q6kV7'Rd#ݪLG*x^έbò4lL.>JUz]BO) 4uQkZwK,Kz^Br!L[lf Ja5r3JV+zю$Fӎb}BC)HnjbȂ*hUTAӄd0**I 0Pl}$.wmQK+>GnvMY})uc׆ݪ[ݺȭ9 KnqgdM;hml^T= ]7Ud6t[gm/fl fkHX;}j՗"u[#x,`O'򷡒y{ m{}.6Ud]۪>tRo)oh\{*9ݯ/VxTӕ {`91***S>S>S\IviB5VkfsNkr J N΢$,$:śAxclvߜ)nvs,ڂWCz't/ހ<|w/h=fTXN:덡GXAB >KmH$mU"W"[ymYa:Ԇ ʎu~Ft qҚ,WȣĽ"&9S)2/3$h#zq߀o|9kwF+.yxDVYWK'?E+*,5o^nK4Oףv.({k>e9Oת[SM r{KG*Gv f4H9('Jc画4 |^}mɞ0 w )[^JxQ |s,c)5{l=vÒ.͙ǽZYlFeK?m38Ϊʺ+Fu9&}222͏Ȓ9?R+ԱXrEڇv䊔 GWВ{>vDHtUUGx*"Eӷ3]-.$O>]K`EOW󧝮'SiWX8vt{[eo 1h`[o?٤s:k]؝?_z^>v]^:{IdZiVިP(4*/ތ"TJO~p#O&S%K]ƤE/K9Gi.(^Hmm"C3\}?G `KU/'{Ѫ}Y /&'_ءB\8qgH#ڡ2<`e~JWeXJ3B+Ѯ%g c *p )ˡDv$xbV$sNb>@&))VL6VS%٫ ,kn.wqi0Jfs=oǼUbbU}Y릲Ys#za.hԓ &_dW{{H3/dmw߯?qU\tY`UnlW@lW3*LTZy WB {W6byd~PoׄE*p}}ݚ[k}R#TzFTAiZ&G]FPϢ9s"XŬeXHzlotΫ=pXn]cu8뉃q4 o9SIYR H޸(A:l 躿g]y } %G vkA,Z]v&ɎʛXnr^^{_Ғ3myv>?7/ZO&{2s6Q:/ YG g(dLP&`ƉrkRBRJ[WLH>sHIoJZ fCe8U4}.Dq];Q37FazOǺiGn</Rs>J=S%V255lJq[m]ډ?=zٳL vس6+fڭWتYi%/ݣKɓGiO{Q:/2%t1+M{~7h5b9/V@  /zNE#F`M'!@xi*CfNЙ Hhe62EU-WgǭL3]v/TǴd+8i>^-V6]я:/G?f~0 IH4e!2dERJY8*r|sGޥ` `YX2i C3 0 4}*6Jk%AOAtaUF9rϗެ ό@rmN>|٭uxoNHDwsd-OIeIPK4c$'R &7Fy {#I,$2,Os9[| }fyޚ J 94 !QVHq%̕1ph7SXؚ p|<8ygYd) s06 uOO<#>3IZdnNE?JGއ#:+ &ԓ@QmoR #^xhct_xf\<*PJi^rʂQLbAf \%`$=r ?&G|"hTj=lxu1kqiW[쮧ge*a`4Ya\]Aa\kwj6o~j\ξ 8*Yaj O-5o;6q7z,ib{ Z}O;u)B_ _>_.Y ?n^x [oˎю̅fpeLB1kڱ=p ˎQÉUp%6+E*d(ށ9nT9tFԙR\$,d`r\vw0"%Uze;rs`bGo^;f%Qv]B*ŅkA[oq.P,ܑYxo00k{YY7R<#. 
,N'pp@̏{(,_`ui.g|[v8}h1)KO~+_3 ]rv"ؿ\bZ(ЯgeJe_iWg_&wnQZ^i&IO~g߶&-=}B.OlO?M Vtﮞ5-ym{8Yvpמ-g;Yg׳|i|L˟'L M}T& 5oLj\J#PaƵn| [x#rCظwoI.1oނ06Bb׋/k.{;/rFj+9:Bdɤdf&#iY&HR[qtr=rL&Zt'3TB-*585Qxvy#{'_-صqbj:W@؁aʊ=x)&sl,ӮzAdi݊<.hsMnY U] iȅ|w!/NZfR b(PH^|:X2ki !3,Ɖ$.#yܖy8Up#g ":AEOF@(ed @ ?No r>s 7;6uV c,s5` c 4z jD{wCGwKa,s.y"@IỬtZCq<<<'x<ῳqYxd:"dO!zi !L(}Ȍ*LPTYC.;Z`ّf^Qq8Y)Y4~Z (xg-qv(%}q=8 7I$_X#P$9 u= i,2:^ji=b =tK"J -yE,xoHYuL$) ̢Q̇´RY E !.GKΒ*阄rPPYWcGrL޹$:@ 29z$`k ΨdJ)2DUDalf(]V&t [-!1T{(n%i_NLOšHܠwj֫~btih1C4( B&'tNkYP>hK)R( BuopBG~ch7Z8@@1Z՗r#NY R5EK*ΡSO;A%)hIG =+p2s ؐ'?G¥@"(c8ANc&GeZc.10 2/O/OGx,z=VNIcQnZ>Is_T fNڂ֖{pjR Ձ p°ήe;qI.KE !8 keWҔG/: ̃k%ʽVɽvɽZ gx|HrR6*gp΂W"Ϣ(A=llUZc2EF_iG_g{ ,=6s #ɜޠ-ˋ$nIYr\TN`.q:P(e 'N ,U}S1a`)H"*e:XXxEt:g h1tc 104:@i<7&ZNY-T3F,{%5 zkAA=ʜ=. =nWS-:8vz\Fq >=KVp9.!q] ns9m;~.Ŭ7GVWXWf EYK֤yn0)aJZea.I2qEf1:99GG")rn8;-Z淰u6F,+cff{"bY rF޻C[o/(SgU 얮Ux/}l::ý(ڼF+,~vAB..x:ݲi8-޶^!~hJun7 =5Z0؎9sλyBۻ Cˏnx `sk7W_5Xxm^3Mcn{u<JzȓwzQp(2QPja)M 2Eg?{Wȍ/{b@>d' b|%$;?EؒmIM[Hl[")VGf )fSB%*߂x-KOtWn-9$.A# :<1#7:kBo0RGћNHl4v w,T UR"$#0K`.yO!)!hTKp40~=~se}T:@9&09,KIdUbB )Ź$d ,'joO_:n";&_yNNKs8;FGQwQ#z&]WN򑆟niTY?'[u^yrᇫw0Ȁ^l:ގ&Q?z5w3[ؕ-|Ufpu3)mLh(U|}EwsW l}NjuE_-BɛVG:R4}6Q r1?)?5U4'9}X? 3"8ݏ?/ço?O̧w}?Њ'q %~"9 gpEӺV޼iaM/`A^1b.hHs߉1 7KDh8v$[W~WHlR_;#&RT q!>GGV󃉿+ ~_|W+Π<¾Xk='y?l$Jeh H,jIdNKY)Oƒ.fc-Fk61}qx0.YǃJd{VJJb.(1yreփA1:u8uFi|u F{6Xf筝҃Uq̫p݄[Z 3ͫ4n3yEȬBpG5e=%2],Oo|j@9/bŴkB9>zK ضLk7LK MbLk?is=W [ nQh-Y[Vm12 dLAHk@y@OB X8FTseݜ\c*Ĉ:ey, FHM-\fDnW47j K)`dy2_,ӲzCFǝwGh9 QIs0E\: X2昸Cg>x K[uO-! 
5H) If#wo19<$7 'ךPyO8U4'h MûIz!j[AUʛ EKvvƬgIm#)G&TEFeeR,8`:fRR49$!Wq.eXMȸ ͌b¥Zj^|dїQFzCzh1QG%eNcLCƘ1heKвno#20E`{P`S(zQĐ,!̀6U9̂6 ~g3bA?.{ڵf=_fP{y-j vF,,U#NE@K!AdlXͫaJH!1CdE4aML,(H`"J*|$2v2Vg3NyGƽk͏""Ek51$d!I%ŵH&2r5q Ѯkv[vJQ^A2պx6u7q -e $\DOޑ=^*ďEkdmoPdxB"j#xT B?֖޿[G%6cPH+nj3(S(Xҩ:o*A>re,ET eƔDk@PU3"q 8UMG pǗɢOGĞ_jDrF^ɇՍ= v3;B"x# <كc:y!msJ8-\b̚bD {G:$E)"B-p|[Y =-5 L(KI3ZkPE%I:Rx-Q2Y+&"O)t݅^wrwt8w,C9p6,x`X2Fexv |f(0MRNE{e]+tq(dhP[ږ]on,㿅R / <#B34~0.4+bRJ増RQ:Jm4@p&łgsC*a_2z=Uܽe N\ Tf`F4^XHFقpmцg.r=Oazjٍ/,,y)bfwmB3oF)%o4&f~^@}YМpml =jO{U{\w;R9u|O{kG-=:E`s8u H\Pi:EJ:`%OAi_(b) 6XlylVgLv>p=t-ؓCR]R&8`Ɍ VQĄRBjg82(k )9W΁Y^gmmF2f(sk hȵjl6 %~#E)K!y|8{mlӴVO>ZxGW.p጖N || "3]x!$-&nZ<ٿ';j/'A `_!:B'CRGM9}B%QR8Fg96|l`|-="]fyS~녰X ʈLb ֩%.Ӵّ a"ZSJ1%qU58X.7Y,uwI3'3qdP‘Rj窮&f ~g=‚vZRf>lSLsh+Zr7nYN;L5FLOV!4AؔrmӚ YՄWYe/MHG&G tY^a1)Ne&9 dZjLD|@[FfG,78N.rlDX dtGt('SM9tJ(%DL@-\~a91\rc,]i]%|kX59mil kesQ6ЋUHnfvӓλN: '"0y" 0o3FR0ziphV'4`-p@1$F<":"K>?&4ژo0&H? " WE\n { WEJ4-\}pe\YEpઈ+PHk`ኤ䌵pfJ<Ջ#&{z@Y/`p?&8ImG;/@Dh`A8]0Q)zM'2tG7vtRbhՑus 9{ҵjԭ@zAS:tP' F9P*^-Qr-'\Hs-8ƵY#slbm-AF>.Ͷ}Իi3ę7] nӰ7ve=!Yb^o۴3G߽qpt3lXh߶6{ZzORۦ,[>^}Ozʸ1~{R<"-d:W+^ T\Xyd;|9E}yyVpYgf&jL4KlӉ>u  G`5rx #8X >q=jK9(/һdM.pLUgIɧU`K)(!8,Fg2'D}I*hFΞ #$WNǷW~kQǏKݞyT/xG; U}vYmvG׋]xGWY?l]ҝ(i2| |Åsx׉Z*yŧԺ\n[=5QmZHhotwd!f \t*6 f=5_6ݝN殲 Vߪ..Lw7UR\̵,Śocszϒ>4cٴFnDiJq&$뛠AUH.:%/QkW F Z <|AR %d'Q Nhg3I?x^Z92:gA?ap.jf$ ՑAnb>P>F| ` -+,n7'#}NPxȔxs(2eء7BGRbߟ iWmh&1&z4 4ՠHr(Ң"$3 V|r@LjQRƙr~#S*L=ģ u(ţR$MD]46J;HF N/tͷ(I:LM ќ C?%b:dahTFE@?:s@B XEԎ9; ?nq҉)s6r7֔9IY^){_h".~V |w C46Gx AۋL, 9r* e`!Eg:" A|ċ~ɎNxnX:[mg߼}6p`Y^\!bDaոqtyU<{wܛD2,KlxdZt 7ݚݱ.[3>RDVBWYNTn^qvpki .m/A͠U^[c2&PK&x $dhWel1Iꦊ2-C|L (jSjhi NgLSE"ga\3.Fk⎧kH_v`l6dyiK$KeJ~ ɝlV :ëì@!!CdELtML H¢&H:>!Qɐd[ΌijQGcjD^Y#N#vd/,R4*2ZH:inddI 0X’vZbtU:]%G-w}wz]F$3ELuro*gXx7 KBk޿X[RXXAdoõwٖ.5ʴ9!4 k`c% zǗaK) ǣ x/jȠ"uZ)fPqfhE/ &Jn AFAZa@eHF 1LV tiA$oe62zdFΞQౡ٦tLlKwuOp{u?%tì=Yg֦U7aK}J.$h?t")D΃a` B0Eih 6C@1/͑FɿfYRb !)Hr-hYwEa5}=}y&| ]KŶ;^U<9L %iZz#}bbDDƨnJWČN"NQ<2e*G$3jE%"n2gEv¾.;u)D_|.\O;<}|K9vt GrTVԿ$iJ%UJ)1: 
Z*-g!;fɶgIWJR>EF*.PHhE 6FntKW1qOrSMbyQࣃo1VWK+H@nf`SypU\&$MF&hb/ˣb^ZwifE0Խ:Tv=Sd{JQ!8/2QdT^ja) Р0 ^DřauխPp%Z8:@$@5阄r9:k'جFΞ7}W|~DYCBIZg`zD+mURy$,'\-e5m.Qqb !I02"\A,mVCp ,]E kN48JU*5h}Hf^(Ai8-s#B73/E],[7B>N/N?jKYIs{!?f5qReF 0_zsZ͛#:skكﮧ>A슘K[JwkNp4(gb/XHµjHmÈa}uf6hh*]|4.zr7f?}lr ZGQle+jI9HK)+bX(8'7tw:HxygqWĎ"uO}?~w }ю'ZiBNZI0/&_/!x054ZZc5]Pa֋g\9qw}\m3K߾ƿ'XpovrtkE/aWly*.fT>Ue+JT&g/'qO ܴ/66=FT /nkwY4W!'\%S*# 錤ކkf1z< u"b68)tdVx@ gc.P3T99j<̀~/{ZZ(F-ݛt `ڂ_]ӰK˗lHK~_i[tJ0S4)ehHrgw3YY'leշ3$9t:tP-@ZΑAĸYd@O tt*pC@\ l` .6 r>OzZ8ęW(hyԮl7#4 'igZ{o{wb8 B6l4Wt5`:`]ks#Œ+wڮ%;K\`ezmIHoVwKjihK-KKY:'*P6gq[V,mTH-<*WKϽ ~~8B9UL+k /P:ե;tsG(N""4y#ڣȀ0il Vi~V`J%1R2.0s %[:DrM1B,Śm֬ꡅJY$Y<_0-Q8h"̣ CPD2VVFeZ y+Ӫ;y2fc&PfTet@"REF (Utȴ]Wǣ`mޝ~u7xpXn<B :,Lq^}*#8?ɤ6cr'SP\c#s.u!(d4):! {O!wn4U#I 2 D (#PK+"RN!"5i7,VP;y̝v 5K(F[ U9A [$;^a_@ٙjY:R{8jօtoS熱b6` cݷ߃?~2_'-ׇvz0b?)L d&Хg* Ͽ@f%>xS8xPЀi{h) [UE฾lYgwsvv]gGů[YYuӁ͆ka>{mVʠY iA$*Wa4u8|{\vzW 5W^yR/ysƸ5%6BbIݶS9 vpQ75B-5!f^Xn6HJˣ98 m7CQ9$V8m0RS-yA)ıH1Ƨ6O_k'WZ Hdn*ޖ/hNfH.E$M?0l&{$w l7l^..yQ]0{5:[}rjÅXp ?Ȋ< (tާ-MNUtr4sCr\gjkviޫpg2OedVvE C7ts 3[UFg[gl$sPݙ- ͹TbQk,&,nREAz]TN Q/a t5I!񋻚 @ _O?KFBD}+ 񗗥-jP0i8uj+pp)a Ao@IjЛ)GRt*堜 Ԓrl^ҕ&Hv;M^i}R`!ؙes ŭMY#7Ovsg/:~!>~VS')%Ы5FoAbW oB?\p464<NX3((dQ[^={t 2E2^FcSB(kB.0Na%59 tP[sF6+e+vf̀Ph^TbTہl W*4Դ`6| 95*gz+f=QnzXD^g~)4̶5z+Yv@WJoVK */+cۈϴc9&cg-@rFϵ,wJ`5XcY\0s,wf|s5R^V 4a RlzVʚyBi8=}w37Mh dF9%@ Cڋ6ޣ},D4 r/A茼9 s#+V ĥl.$i_IRJJ+]ΆIaQ1>S¹c&B̀٠N@sjR԰C:VKk_Mj%}2>" *}/)EPM6 UbT+5~ޢrYmvaQHn[ 73ȸA> /Fl;8Ji1X)"Wqt.Fi@Jʻ@̯hM;' ,)=s*B\|(bݡǗcXágHPR JOӤ|}w_e ^7 e-%qfsI!87H\ M#&{!.Z,÷*9G"Q'IЉaOTScĔ#s;lB8ZN*MY7V:W9B=?(ja[ѐCC=imOsfʻgSgSYv|,hxj.[ S=ބiB`}ZҭlɅ.X7Կᯱ$_r-γXVsX^"4Nb5|2}mӕ7`[M>@2P|#@m~etm1c0@p&87w/a>i@~2ihrXZeP(\NqNqlNqlDa%κ$:nǨ1a#g9"L`"2J[uS1G EmrȰ,] -J)!RHD#i)ޚ8۝V蔜j7&mlx>9lvB'_/]-zϑɃA]nC썆%Ћp'XXjIb2A0ɵ r B($<3@cW)Tk%iiOM!lIRsXLvX埗Eb!궺Q*1In-[pQD$.c27k1Eftn>j`fSGر[=\ǔh<,;tQ8,֧Q5SaP`3nH 1mUg x$ )[)Hf՘IDk1 Jl5DVHKDξ+nejd)z㒉Sh)=?0؛ O5ݽyY|bNe_[\ mm\ J630τfK ,?NS߂HۢloZ]xp??9ΰ!Op.5gѬ-u|WSg+6M:qDG[~IYwZ#ݪuq}{pJn cœ?<~>ZtWcI4O?s4.wou= 
'PwtXd@Jx`ŲOƣ@Ϯ "ĝ v˗\wu\E}YqdMMmџYL>?o?tQufC5͉,6/npxzD/G~8}>qa?ъZ̓E)OVutmjuM- Txn/|5Ve޷p $Og3i9z~jռ6>5yBb\T)~5$b∙]z~:ƒ 5֜|}q} V|0ﰑD+j˶JλH,ޫUN + d,bVncHo$UmXb^Ӫϟ۸dȻ<}Q'=,fkB'MfNG `}F.A:悰5SSggOkFC9qm=xAl3.AY=na{kg룖:ڨ/]U}/Yaz׋>=Ub*;t[;{yS~3qE{6{PlѤ`Gx Ã, 9Zr* 長v+9>۽Mۿ{گ6I2_c̛;mo߼}ݥ$0@Z_[qݮ|-ʭ/Z| fϜ6yb|fM>6pRa&1g79EmӧWΨߏ~ &|Bԋ%~\F{uz'$^?.2#qdzJ$B29FА>g B0*昸2d&%>YFjս8QNqMJ&/?rLHoJZ fY5q6hE3xCr-= |&ٌWj3X*cYZYYcq)-n>bl/4~4>_;b3&PHeIM7HhҒ1bD]btqFMleb( AHQ`SjhĐ!\3 kdW˭AZQ{̪z:8YF4R)DՈdL) 罶 "BmJBXgxURT|x_W:B(exe~|T!/:5jl0GQ}է"/Cԟ^KA%bB+ lu! @\Dmx#"Ȩ5|€ʐ2bh tiA$oe62zlW!9Nhm: K f4Ov==4O8#&GGi-vPeX#m^&QO3CeIۼHsX$9F01*&d)#y5D?)Fy!Smfٮ.Nl<ϞY ̌&3-DH-+7?gyrN睪 ,R༌\3@]{ChèM|vO,4BF➎}ϝdp!j_>_^cܿJComx`x\h&@ 'Y$1f^;s`t=yb826*8(]RLp q+RVܻY8e_7BמwgC6s?6! d%U>Ӵ4ٓJe249[҄'U5W9Ns>I69zC!؜<7FZzRvjl;yWdx~=ۑklEƐvEoYeI9nYPLpy8O/o]5Z"GlP?R72.gE>ބjB&"#V߄|,MHᤵi^j4 M,!4dZHDff2a4eO,8N.r9Q>2ւ*z"xP1=x(d89W4.\w޾p%Zg74=Ac4:YT^%1=_}%Z)lw#z(T˜#oXPqR&kGNk"fXOcO O zQxe:zD;!a 0T {fLP3QQg CȲ#18CfqҸc,ǔe# g`% Sg-qPmt6+e!O'!^lE'ن"M#}lihw{s]uMT>w+ 1r+l2Xf"e!dfR$HixdVJe|J\%⇔:-KRcb!st,.-PMnSmkJB#DgA+V(vF% +CT}tAlr٤Wf(]h.7ޗ3`b^'4IA}ʹ.H}5D>ȍ7 ܡ@%bjM:Ƽ7( B&'@k>'!xe[69 QҥSc[cӄB^C׾}]﷎m {64i/`~Z1=A?J3 "l޴fW}3]yg2 Ő.N&Ӌ l_'gǿΦi:ˏ?IZRY]xB\ma9E+Y$Yez.Ĺ}nٛNﺍ5@ e~C-6ҽ4kz}3o}ũ"ߋA5 %2bCvWu0yq ݝ_ڊBjyw(8D3?g =?8XD7d^1l\j|)trN<t?ߣ?=0. 
LfUw̖Um"ʊշWkv H`agઈ+UVm"%E`2ا||Wd;H]7KA>ਨLy1/qbnJqKNCI3bvF R В`ہ@ iCpUvzgp \iv*Rތ&J1ɯVjTz~76T`[ KSJ09+?U>N;a|dJ;o?-\q<[+ny6B<%hu}#[C2Ø>3 sE|8"4:J 0RHctT!1T[BvzjD X7'T )U)SdY`q.P(gzɍ_[p6?&{%A{X<_bòdeoXvH6uxYUL95g !ts?Hn⍲#)Tc.ξWH-A6.oq 㑍{\E|'u4^^m--7Yp~>r|uIx5Ҭsw"U+)w D.05*2kEF m!!˄WYAɡ9H`rXU/j=Ώ~`"t;%5M A(G$fu3$RVyk#-&3*An]Q|v-ޏ%KZ-3*UH r\(xgKF )bVQ!jX{\aMY2=`B12R9Ԓnh8D.b?d_!FW" Xƀ^fe%2U4Y;g*JUYސSNg"& ki@]dtܲRq,Bg,QQ 0ק_|ſzh0*뀟R+?]kvڹBv1۲Sو=-|0&/`U/GRD)T,D \Gqσ{B}ʖ:ᎄ@YL\3kO`6Kats2^oV?E*/0$M͕Jus ,7{ An~^_'4Zfp-w폫ߖ(m}si8+O𠻨UMg]궏6M\oB/R̶_</V۾X$']}QQ1U5)ˮjDw5#mLIOԃ,>nV=sx9z8:klw쪵Y=uV6R*4|4nC\OIo׿~7yV;?ev\Le+A\}Mf~vHem|UV۱'~.15tv* m]kߗ4wW_Gwt{ďvC$[İ,[#ѳh!'Ꜽ*Y%[d"j |~;#?V GH{8/IdI-)̃\*4:U6ujcJ]ا&{(tDBwyUFKJB`PH&j~G^Rk0TZHc$Fn&7nC4# qvBgSҼ5zQ2z)eh&C̍`C& }x wң N׃)t<7>5es#S!:R+qYdO$^Hut:c7xH".6Qr:O'ףN)! ʖ*U4vƃ9W7Zt26ur/?ɷq+m߶ {=nT.}p~g>Ӆ՝OushZ7:Vm׆\-e]DtN0|#a(=cŘ֝roԈ.tH1@6A hJKױ'[![0Oy3dCIǬBJ:#3"rEIqQ"F8d14'M$s4)c&ʕhRhJȠ8`jٿ~N8|.4D?:n xk%oٿr6'DQ9F餁Yi Lfs3moUw>HiZτ~==T@{{Gl᷽y#}%/Gtt񣁸 0jpy-dtO)e!xTₘ.er/`l[];{q+J8ڒg",U4I:$ItS@6J`u1qPBvˑB 1UhR匇 Etv>k^>1@vc9ũA1XV՚DYj4ich{9jޙwӛ6@ViL[Vgk{J*jhw(ןoNQy2]PE#FŘQ ,v9(e=xydRMi\L-php1K.z:tLf!EK-FZA&՚w7j#X+cEpݢl0ܱFخOp>uwFɷp6M̘WBA#Ȓ$XHhqґ 15l"6K&Tv2+ꆢ &CRؔƢQhĐ2"{Dc,Qr1OT%i`ŹSe1C4(vYe!{uwEH']L-eJ3!t*d!aQ֎;֜\o:j䩸(+qţǻ޺|Xn/kv3uJ2~G}bӽu;˫w{#>+U&_Fʷ}GbuT/zƺ%o3`TI {W|Efѽvԏ\3>CMъpJI*In!x0`2t-#3`䮮P"h >T*UTK 2Z/0 o(]dWV՚sK46Wɯ7tL0O<9aea`2x5_ݷ;Ű?x::~fk}E{ƫzpr+X'Za%Mr"iD΃cII Ԩl"PrŐ-т0YҀ14dD`=>NJ8#vt8rՒzuFL;͏^Ztb-Ͳ׉}fSY VBb̻Hr"`"kuP7"IrD=t~{鄞y>TYy5D#R@ɐHq-6{R葧 8*^#ǻzEBeo.CG PоF=VvðtcG4_ŀ7=,.4 ^qih 6Fr)mTԕe@.7_z&#}X%,8B -^e;^f9-.r`x8p nrt$Ѣ儹qlOwy'] Զ/ܤSw{\3ׅ=0]X|fafGE+±-c.ܠ#5W//7sw.Rvxnz|{Xx3 DcR00WR< Oپx*lPec(B2Vd\b ,u `ŷA)٬A 9S2G[./«:gVmĄ)GyX~_\JK$))&1}|K7|cJE02KnCwʻ|%rȢ_}.߿ނ$8I*-'`,ٔLf kxdRf]IxZ0zspIs=pHRK^#89$2 Bh4Ȩ2yԪ5goYR9F[UE>^BVtG$!c%R_B /= AQi<,$-JDf1a4g %]9X'vF( CTt $ (0^r<̵Vj!i-.MՊn<˻c,޽4/>4Lz[ƺ4.YcIbzF|k.{Ͳӎ7 4y1FްQJKl<:͉U7}}yRO[9#ݡIUt,/L#"_U6MnU(tZVn=ڎL}NikD@ #$5y6EƑX6>?X qyBe~k u|!TÁ JÁec1AAgwY/:)~Oj'?}I}z4G'WX&_q7s5;`~ݬ5zεv%qx㢵 
h,ߥGoY;!>2g6ZvqIB_ho*8߹2NozL/z aN2z{_q6^z3!.{3goR_! 4*)8HZ7Bh\ԶqRrj33DZg9ODO8z9gSp@rc&zt0`a%2LEPvnZ5!&ji.*$с/|2pTȬӮp9$;٬LG]5x`u6dx\a^,p%tzsO]DZnrܑ9!T{cGo/ m̢/RHv|3(ALP("k'1 DfH *Iڅ$lI+Znڗ0ӓmt1,A[X/$2շ{q ] FC2{x^Cl+|9jжOaGIleS/NRRaF?Կ͝mK7b0r F6Lk䬊Yl.7]9T:de%=D9杏3)-d(H6 R=9/eqY@ޭI$2xF]Pq3m|"m/R>%@W^:+v_o~tpoClkFW\xV¥~ak~"f6U,.anTvSth /yn7/Æ9| TiBCu3s3W|١:+%MxM;4~RIy0ȯrI$wK<|=']j9 h_.U 8'v;siy(y#}~`ޟ>2|騍ݧMଷVtG% c mpYL) WAPmeLNSyV,yW>@"}סhth)uŌW퟾9T qf?f}Ƕ;{k[]ORǦȭGj!ztV;X3 \`6(Hbx؉+&H7Z>-dqL l+g21ihD:>٥ rb Z}:Ltب׍tj3ũ8U6cfڌUj3VmƪX6{_:-Ef.dfڌUj3VmƪX6iUJ 3j3VmƺNP6cf =Uj3VmƪXu6cfڌUX@Fc]ܮkZ`kZ}R=R4؀~3E(|)ċx3y a cVKXz~HG3Ʒ<5R@;@>=Ǫ.Mv<!TEyV^+c=K}uk)cL7 v$7&Nj -xގ]!-FDˮ$ot7%ZFpQ1(iSRЕJR"q\[1l_٤AaHdMge(8psRDI Y?DŸu9p E90[<8H?yBA[XKS>: N|?A|LxSxQT^_ Fмg`BӜhpV {{3:N4˦{I]B rUuH2Ă a1蔄Uh!R &X̜ RzƶXh cXW,|K?o;S&_'~p|OOh`0/qhFf+zÙ+gU"D RsΘ +i@n. 2҉I`3dFs \*/"(f2$a& #xK/2g=b0,9'ڥzQjՎ|ڕvlɒT 9I^[쭕Zpj$s~kxytn[-%L%&A N : Zgxͼ"u*IKl %:|^0% ..D.Lɬ`jDzMaJY܌G~m'S⢶Xcw.首y/#98鮂gvzԷ{f.8NzAF"h)u.AO r&{AܗgݸC#Es~G?`=>8x4Y=YUƒ@H\|%&h`$2CMP3ɑX$+Q(8$}OU=kҨ^~qab8_I-aŹiM9Nޓ&C"G HCiC-^t'KKnx]x!jhu\0D 9g\)d e5%.ۓ2+n]m8 }H0ӵ [&)cΟy}`u%sO)%3,HOrgtPƙ_L[ӾWop1[޼e®7?Q H(dMDE\I I4@9mQ@sx~tK< ?8bek릴L:lgfnz˵#cy*)2jU3j˂Y2D:$ 2$vpMv!2ab)Jg-֜-3`azzrKEwS{)"O$wM|$Znh/q]Uͧݑ(4qVAFpBxqIOFtiu:B|m]β-~^%YM\@*fǓ ,$m((U8#ʼn@Oq_;4sаf'V!k=!'<#OHxSR%G<$ῗ?:ڟjif"F*" OˌTqk Z 𜫈Du"Hq :f2&0l!Ҳ|l*f< n2ԋpcb{׀ YyGYw5X]Xq>wgA6x;%9`q ΋* ^pj^~6Y,6c1G}@ Fp] N, pD5 ; GJrN?YL}.r4f;__jB%v ^|4G'?ߝޝ|z{w{`=. GMPn¯i|QURUc}˪-^z %/ے١_-?3QUZ53zaAg2|<EV)cϟքh!Vv'f Axh/xȏn{&"HT^*u [o ς2`p)ļHg!YRV!^J6,m^N?q#G,K=OZ;Ùi+SSv^]uTB1.F ,1G;f8Qu\Upe~ ?%EQ]y"_~qb]h\!Y5b:4?__=D^\0c4C?[XVli7Ê2:H9w R>mBFjΚ[ǃy36ywշwCCzkײw%Z֔z.hPKwQß3& 3[q46vrka~߫?gqmkWq]E^NlzlorstWΫFպCXhŶ=,6 ix)w~|{p bG@qFfS)*z Z /yPNc|0Rk|w.F<,PqD=8iO cip8'AHb .jdegg'L姓 *:$؈?JAhDA48“qG\ F[d. 
__m6icEB=Am0;>&IjwZ.$4L@C"E&RH xb'iy`NE]Ћr`=2lsJJMd3B9xIgx uLƗ#l6I7LEXݿGMzslr-m}kB@=NUqw~9͟?q d" lh1Zjdď95b*'߰}f!l\Z/5_x ,9 )2 UB#(iSRI"%mqv1lXYYf ʰ F%Nd|IsYSp((6(8eK5)Oh 8_#G\z̯O4Z70͉ 1.pVXqlkNpV#]ogpyp^V^< HC┡^{-TH8 AVa y#؄Fe,֜-f[Xld<¶PW[S[x#k2ovL4;h &G[lDghy8<'*E\#s!x%M>љtQ!J<vC60ceɥ Z F&J,՜-<]jt|H1^^q6d{Ņ b=3859/u$WB rW 1 S!M"ȍS-IEyPR3.8!(Q{LB-"-bl1vhgKC,6JjyaȪ]vqӈšC"uh%qpObJ/x&)UI넊"NjwaPLXAgq 6yUw=Cj䞪;xMl8ܾ#s64*cs do|9 0_ \Ο ߞSAH?i"b[O @;\9)uBc5 GT*fVNuY0 %sLHB dHfEy:鉶6SkΖSg}s1q.J9@ 怜Nj)S/Ά91ӡ,[)E_-UhѲf] 6 *PԵew_!dGx %pSyMF&N߹g G!J'<)֫RB㨓RouKQ{'1҃q[g3d5]dn{7Zk~)N+Ǔ `:?V"J)̤B1 bM_{?MGwwuU>Ͻԅ J 9qu aE%>l)&CUoի-XoU_*v| 'y'?Uҕm>$KX{e ]Z%W4K:EռnQV6VI^!yr 3[G?-ګ}phI'_\]qyj~Mrmi/a&YอQZڨ@wײTUHŵ 7I^4^FOK&*}}:4;'?X53s}`;lyR˖Jo '\-s&ݸt/Mzo頦2([M&f́}\]|s[Ò՛/n]˔5gA]bI`4$%B%\PC30]H[ipJ[ωj8c&‹;wĎϘw|Rq@6l"䩋2Y,8f\^Q`$DׄZEtG 4FEu MT.w67v%Y'B8-.O |5rm8׆9Rί ЍDߧs6qNrJZDjx`],եL9ª|ǒ˳P$\z㙠c-f2d.b&^J&dGQ^ A(c"߳1`WN%eLŘx h8b>QJ*R3q6[hEC{\[㢟)cNx=c ӺN|y$ Z.5!rgVCg*g=Ѹ:Q>P|rN~Oq/%W,s|~WI'm}pBR`5$dթ=(jU{8spY4 T% \w/Ц*k/[gםݽE|k5ts)hh|yHeH&guib E0<]BuzZGIX˘2kO [cs>gWt,aLQ231) !eމy_YtdJm=R߭Ozmto1ow( TjL+8 JV+h"g*s6HѶ igi%62nu4[mMrFhR /E<:Hgl^/wǣ48v}g:fmݮ(8Ɗ-JO:-zi@G^JdUH(☘:S1!;5!%;IR {uoB.i-2LG RFrw LF0-OX{2[b,XD|Ȁ@7cQ&5.pEp`I˭4,,uŤ+qa;8F=å_~p V'5cz=]0 z$ gkv6,CnAZAnW\;7‹稞ZB +B"èCI}V!ÚÚ5?fdNb" d+l1,c/W|Vdh 6@uKiI&<}ٍNiakDE8lN굒)b`C{Q DzNh#c}am SX *ыD/$3^d9EWs^tՉF/WP AH]e9uJBE]ei);uudQ]տ #,4X=LeT^]]B[p6`޿c~u 96e\]Qp19޽ j({obo#LZW߳000> 9π۫!̨5FKιh:{K裷 *%_ј `zp#j( wȨ,s:=as[#!2kSsȚ:V|E\ KP$ol(4ZNϔ|g8X)C&'%wZӐ`EZh@,|,v|fR!ttܿW=ۃBvއw*-LSnLo.꿐 Z*f'((~dp |Y,ذD@ 6kqp6$HQkA#0kj♒2JՎsj*2 1z)]e /P-w0 l-ԲZPjB-[e ;"_ye{˒`0.Բń,ԲZPjB-[e l=QJp5'(\p5ƞX Wcj,hGM.X9F 09VQN8kքx]% -)wu`s F8&NEBz'ئ=芠`Ow:|UH(q~prqxME[ѳiWF7m  J ;:ځmo 5\I$ъ]{#6Q ڻpz.aauYMOzlvk[ѐb7wh Um΀L,~ u)Tܚo }=6 o/,^ γ0aE2(d@T`a,¶| /f%_I[8juU,[iTǂTJrF,B 0#lXT PSgR RXB!"śwӼ7NcCq݁HDf(@:p;{{歜歝͜✻'+GQ(EM -H A-`!i)΄ 1a=36pg'20Rx=qh})CT%ڱS3q6;şjSrݜMZp4yzS}}.y,ڑ,5$> mnXIQr% PRj, 4DL!šNX`٘1ZG|Ox{s{o鲫cRcߣom_ޑ+<\a.Ʉ#`8#bd 
ÞQ0\-OĐ'*B>9Ù<&)єu,.,,1kV1xnQBuS1HTc411 ɬsp M8DIlrրHIי8[-u{8:c-wx}Ww7{*6FZWoQWZ9GV8'9%-Vd5<UߠeƁH¥7 9k(SJ"f2dBYMKk "vr8ъ~'WT3Z\'(|kxyy0- %L*(HXR5".ͦ*[ΩĈ"$%OcHQeHJ$J?X tJ REƆ7AMHF|Om>84zΌg}U]0!{iY(2*/0'Р0 /g3Ðܳcw]C"3ooxh_ K)pa!i0Xp{)j.8Q.mrQ`ͮ46)+`^(eq.`g8M2@+_\Ypey45Rd{R9T^ʨ9wdE. b_dι`Z Dk@f'3e%72!ꬬQR9(9K+R]Eꊺ=if@ 9R{Qn6ڤDGNw!ϭWJTrDtup[#Ҿ~liݻjy_5RsV ^oQW#f=Y>ӵU&|rH?Gu.>bqM0F] s濌&y<:>YέtsqH4L'\,<֓;Nc]= {:Ftwl.1M},Xvdj}FmS٫`{dWcu\IֽEFґұtyL󠘍!??m-jZll4Y>}_O~Ï>qa?>8X$,P"oot^#\oIqŗ&8yIko~ʅUɃx'$6ْf4ઉVnKe_=< cH^@erc~י&#op|} ,̯$Nrlmsң.{Uȉg*'` d "gĺa =H~peV}~?g xX"x: N 4J"Tgc.P3_Tԙ7.4T[7 rm=ڝYm[vFYA_0^qZ4_UdH)CCvG7S87Nts!SUWSԱz"[x<{eDmipR"._}Z_4={/J?0 ms9Y)+f6i@ RFs& }=6۾lq[w!Z!22-"9B EX,(jG9; .?7"lQHG>3b+Hک}~N6q6}#Ó:eS8#S'rqEsgϝ<Z(m!}`5R#,h Lf[J723VQ|#Ѵ +;zc\9nx7Nco:1\sļ>.k?noSg a Bc  dy=袖K\2rjի)xsrwO<8wuFffɇsqv/O篇'[b!V'{4}&J`U1qB6ꠄ0' 0D㚔";L*[I+^Ia+jVt:yd|S#>p1?Ujw,*^jW0B",ڶYSd ʩwpȤKmѻL†ژ%dzZLj}6g$ZzLFe&n˸Ky[Xme<ʶIm r4qJ|g-9._xybpd:pr\! 2:!4 k`c% ( RE31J瞃~%m}>%N)f(>8r5fhF9rmV{%u !e%39NЙ 42}f2.&Ξ9M|m7Czd+9{)SOGv}H;luѽ/{_}-'|Ϙ>XJr!DC_L4{NV$yP; 9a`@J`ObZ^-0 ŐJ44Q_3,)1H?t)cAE8*yœCo7aw0|O.~[kmUzrDYu<ү <3)8Tk1g "ɉ4Q܈`OJD'7=LVNF++q3OCY6Ѩ_xdE Uӏo|7 :~cGt"'~ގc}#v,,CvZv,!E=V:/qOgIc3_r7VD^C#- hor@;b9Ǜne2!I4 OE#w5`)Cm|ʠ'lKi7ȭd\5)X#HB5cIݡ횡aldŽ%u} fgi2K:#DLJ2-K=iY&HT^|U FU6x#>Iz]31)UFV@5qvk~tc^~}rbڍ+zdBRp'Yudy+Y58}CV@91E qR&k^#dS.yKBwg}((G037܆"Ɉ4҇0^0uyǻhB:*qޫ\Q$]ITȳ-`!8o #RVAI:RFP, ą`Q):2X J޵6c"Sly9yl Neal,sX%ْ%2eKzۭXXID}ԖhY]8;M ǴR_)  ,^ 8B՘tP\X$PuH" :.)^Wh_&a؍i1 DЀ 3hIH,Mk d4 AG)5vDklB Q[o(a9msc`|#sFcusKw>! 
eTr.Rj Qz\FMMI4 Yl@BKfϝc=2JϮͥ4m>zxCWaH;'<0XfgR4+xkb& dEuLy{ ssȰ{#it?R8h<-G/dtC?6hhm軏W߽L&ϓ("Spͳ}vbVic@-ME<>T BκCF;]ҁRR;^^ߔ6oq颿x5\jtj1);/]H-؁@@ٿxLOV)g7L{|H9C4lY(,IFxDBJ,\,#zʐ 1^ӏJ3k&v c:&o66>"ѧϙfqdhMp>oD}Ps%F}h;OiiIhQ۽1ryRo@`m ԣމ9wba#%IAbR(c[Iz!&Z9s,˸Bi2BS{Ʀ{+-o߾GrY~,FŰvVTZ`ZdGtD{Z?0U kgC‡RLCQ cw4}y1.OSZW?<2q5eҢ=K62)ՃDi])&I:O]g>&=-]>a胖&)i) =BM`no aI"O~ {xF?u\G]-B:ss;;n0G]-iDL7o}l}EMg F1]>i~|'Tά2bUHrnV%X29$4aM1uHƢwK2$ΐ:d&S `=רM\J 0xmJ$rEf1Z9)e,^i/T峉j=a{H1}i6zT# .<_\kwpKILK[cdng9Xl[^»w=i!Z呐EJskWKq@~B.*-n/ ;ybۻ9?0[:^lxHs`+Sopȿ_6?+=d|%:JV46R׈;=A! >aȍu4 kIK.%|kΩW!J xֻ=RM,DMVDˣF1n靃8;%>2:nY-{[rWvkY9U=%|]p?7rۺ9DhHhE\m(5RJ߀8RMlFOz`Fʏn}"*>9xQ3G4*:N,su1--riUɸ\Sqi#iTVrpi]ڐy0pm9{nnXJa4lQ M%q ZͥWΆ `>%́)9ߴx҃01J霭q'L¥arהgh^G>^]_5ZOMQe \?,)#<2 Hd6r8&}ix,ޅNNOӞ&==WlPNX5 ~;c`3q#nv߼W>UPlHwA@7 oӤ0+39(6GMqHe!+ CV{0(4 $ R'xႇ(4S;Q.#8 wWq)\BU'IQhrQZ/ c; Ug)Bg #FUץD)lCld_19R.COD $PZ3j4gBM֓hi=G{=~OUɚ3{a++  HñdyRMρ>Y!0y]u,G[!ZFH7NXZV˨A3>` Ʉ@M$,)+#b5qvfH=|lfɾ(+pqM3>9%Oz_LKk4ZQ0Gt8Y(Q' Ȁcjc_3BCo_M~Ɓ_]e͙ׯ.HA;[&OYvv{驘= ^1yB4VK%#"Qm9Z-!HpD(sy1` ".A-DꗨYu8,d/ۀVSic$'ZLd5z ^'P Sx8@O/ ސ'H2:ZGX**|PK>(=*fK l:Sy ]wa\}go'YY ȁ{%!*/OgDŽ d:E0gS5+A &ԳHQmopYuw!+ ^]ЅEnBk#,dR7Fl!5Ӝs0'GL\Bdy>R3\a EA#肎z"8]rl̦Fj6 pϫ|zv9٧47ٗU6YeL*:!f)SN_nֽ__G Z{.$;GnI&m;WEhjf΍jݛ.u& G0Bg?/gsq1]F=u#G.Lwz]jez}n}:Ұl,>.zϐytI˛[1L?=#7 =:? x֥u:?gW|&N1/Tk_t'ˋ.wz>?'ca Qf('L6iZwH&|#Y'YȭR3lJ36%oPKɀe~yH*&<<:*&)OϮ4mJI9׏02Y[=~US "NfQN*Q^3a!3NDL8F0T]7^~u>;NMl<<F'+M*/FK*NsWsWDZg, W%o!٘K QlNnu??6k%t,ЗJwro;׶E>7EӓG }R.)ɩގVpY&XKOms"W݉Qmh:4mݙ4&Fokon>. 
;4w@%]UFB>vrͣ칶h0st󢢨_3u>[_ڎ' c{7퇂a]ԟGN Mjn7h׼qЪ,9z|zI_q{4.|t9yqIyxvu^=m'# :ضO͙~Y{yp큳>O ݗy6O=OۼooAXx4o{"w]5p)K>:~9;r=7o3Qq0}5cp-v (Z4kFQڨZ`GAy S^+ q]1muE>j2|5IOsuRtŴ+ Je]PW(HWlQNtŸiCH]WWj*xDA"@t$]1%f]QW'(ࠂ]1.D)bZSS<>G]yyA"aO u]16RtŴhSSY+}Jc__N6bč~Ʀw34.=LZ\\o66mr@A::ӆ3c rG7竾oֿMũ ^խ?^4Kj Y,(b>-h;\~09UoqD](o (ƟuT'Nmc~\\?m׵^7C۫ҙ~vXPџ>a}!T\}nj΁\ \1SvV Z6/X;]͚їީSXFWIJ N^vMA5Q=f]uUAUT.Ou(B׵jm?/i$P ASwC ZZ(iZ G'931b̔f]Iו۳RiT <8:=Ɖf:7n\SFD+.?J\WLkmbAȺQ ]1WbtŸH:(!f]PWJX['FW]|tCڬ S銀Y1"\g@?p]e&q˺,*{A"S,=WP;>"yjrc@Arh1btŸV)c++~t^WNgimAlu5#]yNw*%+P/w@ 4F%gBiNt&JUQjVN` &ZSjubtzTI< u5մG&z0' #t峮Z`(HW ]1nRtE0u]1%ꊂ` ]p|qiQ+LmOެ8U+vr+ƕ]>wk8\WL} u[g銀3vŸ6Hӆ;DU+Mp+:Z1CL\꺲b j-EWLkBbJ^žU*x&8\;qt5OceLl 0BW!UOV+Ɲz;qSuQ:u5C]銀btŸi{p]1YW3ԕB+֓=)iMbJ̝9*@/i슀OlXb\cimAt9BA 2]1m4(Qe]QW^9T 3)bZRSu5G]y>eIl*W2qa ;W_piV7i9j:BQZ#FW<eژ#L.AK{V}\*0(`3q:Nq8Q'xM,#tZ;+FW]1)!j+A7z)"Z?UƃqYWԕ Q G+µRtŴRSbuzA"`=Zq &bکJ>j'IW| rん+5*u]1%ioQ {W̓Aw{ʘu5C]y4QDƵb+GW=e,u!b=P(z9 ҄J?jZJqyG63\Ɓ GZH/J3%Xb1f;)M1wθBkT%ڙ\ɂ5&AC  -03i]2-rh!(i191bZ)CbtjGTi5&#VW#qq#i'چi$eb0_WcRTtƉ㢓+u*u]1`fT|tE=Am ]A9N{8g}R۬p3/HW <㑸h-&]1jB.H ׉ 2mL^WD9L)u5]YJX+9ゑ+5)[uu]9k8Ab+(H[ {9O7jgSQz'2RW^+g'E*wf Qî#^ ؁M3QR4ʹ|3Ystt:FHWUbtŸNKz2d]]UKOa| qLn$.N~$(Go#tYWOzp [q+uŔuN+5A8]1n+uRS̺7^k+u EWL듏2(N+Q8h/FW{ ]1mcWLU uEJa(`J*)bZSS]QW.(0Fɉw(ц29{m\ Z.矁];ǎ(wP X4(ELGLSQE夤*Y0KU4Akda`ک*453e:rʹU̪ՙvq5k%f KKG嬑`\tfJs{]N8]13i1ILis旣+g bO 88QeQ6OЕκzj] lq=HFH]WDR5.$\Q oÇ7 NC?ެ֫5nQ޼S6o^-~~ly6mMO_ݛW?|c<^{50f௣%j.fl W@fMA^xޞ_"a6K91HeK}+zM;[ˑ Wh}?W:o\ՇnK7g7`~&~J-q`Qnfi?/axqJ[g{gx}y}uZ^j~j{h kETk l`q1mpkL}fA"`rtŸS*GubʐsRg ]1G1bhJ^WLil uEPRtAF-EWDO]WL 92 ]1btŸA̼ SSf]QWV%l7eySxo޼^ DU.?_Qtw!m(oߊ_ov1dxÿ+W5pYBԗo/QUq}tww&:;C_CF֩=U^`t7?{Ǒ]@_-Uխ[n X"LqS3!WVٶ9`QG[[]{wO88N):e]Cl kk*G(ގ;;|}OPo܋זo~YX>>W w?Gt<|Bx\Oo5O`} ~!Nߡ9qw{. 
mT7r1?m&$ǖ =Dd8]XW:RLN݅]щ, 3mN{^]޼lmwv֗ymnLO(uw5z ֫>eI1Y(+G p=fMNH՘FTT Z}ʹRtݫRyO0v:ufCh$k~\hho;RPcn{oɤYmɢC,ZVCp.ַZC{ckD F6mЌ}ԫN=H)rv>7](F_c9$\,\j6|g 5SmmjJ1'ZpO}A,=:d0vcF4Ccv..o({PtVbj!SRWkO@Dk|}瘟VgG@SF)hmpmYC&S$s*0&?rih*T{}ޡ͎Z]G}tBz]F8hIO,MVT)Lhi/N{ü 9 9X\#O/֧\rޜǬ*ګuZ<RIIbUTw$ף(]'@x15wnE;IF60Ǒ֑ ~B_O ҋJsh#JK6>`DmT%/-"$8d"n6kr5jgb>iiѼX VU.CkTW)tO|E+ܔ}Q\` :`ўJwb=5ڐ]jy;aF%|./VhK>#(<ݫ N7h NAkahr/uh;Gm^q66*AV*.:' t±&6VW]s`ᐬP\ZؒaWg\/S QxB@2n77 VGRZ6fa=g5n9TMAPڑkj &*ZE lɔ2)O-` \:WR-+5VdfC2!Y6pnXKYBШl)t i𯒡2 *$_xl&!̂ - %5˺ZfH57]\?؂1u6r0K% ȄA[Z.FC(dԭ9ƒ9 qN0t A7= `m֗GC%:clu>J@T`rPfDk!\(VZ hm e*3RPHq Ls$eYj .ڳVywD"=dH_(Wm (ĩ(H!VW.#{,u1^TuA7VR]0r 9qVoDlDeV"2%%&dY2."2Dbn iDjE}B\FФy!:380(crnЋqcŬۈD|bLbc i5 !pB!v ء3Tήq˚S8m-xP\-m&>hh"X 3 o b9PT8xiҗM:V%Sh+JUe ,#(yƝ@FyQ|EΈ ,ʃVs I"9Zd^S0P>x!fd8qsEy`4/! OݗMd :[*x $nGfp6ρEUUS Yߨ"bUƃ d-Ţ.k"Hb"a~x/gg}g. O=1ɘ}c6A@F4=R#ϷAXAJ./sgQ@R@"TeP0F/%8CܖSBEk!3Ўa<P9@Bp5C΁CȋjFNH˽eGA}ϖQ`0% (YCN~,A,J35Y ٕ P?AjDPqyaSQõjg0aQB]%1#tӈlUWH!(ƶDla5{-4>VE{نIʣjd4YICx(m@V5KoU5^"g@zd [L C\[`=Ɓnf es:`t6m/nNnT&ݺ+ ٣;w64zRTZC`[J(8vR%3,FnMŌk)DmzI(ΓFCkBm1y('g =5?AzQ{20pPaRDluHb樇 tyB1C[usFiI,WRcLw*TP=`cQ댂b)1#K3HπOt0AzπeT}fQ `cWP!>w XW Ӷ6b][@57[YH._ê Q Ba( jds:j#&g=]~kP` tB)1TQٔj1Q@54'ynژ#*Yy6@2jETAlZ՞KЦk&d d, )Rȅq+YGy~5k7.Yai\x 5^5pgBs5qCU9P-U'zdu!~)AHgk|T= z*JO (TcJ ]\1րn pQi8sTS.nKC1ˡ옕xMPdpyBEK't\- 56uv\]!D;[`]pJg&^c#w}}7Wtels)j'rr3eRGjbu￿}ćD4픚qScf߸pCw_yEU{ Zow}}f{.כv9n+He^PN8 NkooW;j(xN1;y[Iggc_[~^E{_0zW7O?s{7W8۫ɛ7 IDmf#WhG:?vyo;NVXœ/mwn ڶ0MZOgh㶭WVO4I=40;5z{;7Kd4 w1r8zzF3 $F`4p1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%_p5Al\}4Oh=w@ —d/ =2:BPcz}o}GWu|?TлTޢs] ۆM9f*Głf#O(&F11QLbb(&F11QLbb(&F11QLbb(&F11QLbb(&F11QLbb(&F11QLbQL?'s 1QL=RxF1d |2W R WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\5\J2\Gkp hy+rJ^ኍSV WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\jQ |{cGh{vfgoX@Oڰ5@.ty>"`FGCW.cֺNW% ]~芾pB+1]}\[?mP{̳+ "؝t4t֛c+=vJNW@Lzte `BW@ 1j4$t銜w|Lt5 )\@sxHW+|DjX h=] ,%ֆuLkWlH ] V ] LϝJ^ ]9oCpGDWγ/EWhځqvnwZx95vD[}r}hEoapIc2' Fa~BǴfG*Ziz 
*_$MGٖ݊vWt^6/r@5:jX%j}@Gyuu5Y{~Ҵ)pq~|{iػ3m{wlUw1|E hwu8~fjRHyāa Qwcw:qWܛ7}E)sS7hM-<-N=`痝4x\!HA[޶ӴA6%׸9./Ӣ7jާD!LCw0ۤ.F)o8 >hʓb+/~XtǮ&"ʓMM1տwmJ}ņc^E=86d_N:#K:-jtWJPo^0VsOG/f[vޗ[I.hY,}1tA%/Їj+j%o TԚShc (.62AK}QAdre5iYJwf .YSK?<yݛ)22Rڗ rйdY"ُ{AGu2JZʅ .yd$@prj'\Ou=MbobrzE|hY DeeZ(k/\Fr-͂JJv#qsƌr STr剘tuqK̲rnozsUGtoBz" "!,i"YeֹBN!+)9],{؁v%K[a*hIuqg )#RK `(/P2Yfu(!LB>$k \)TM9pJNST" @[g;sv?9́}12/ ;6Uƶcf`*b:z"jTYYV7@F}EZ)ĝw_򤍵qcD%*PYv@jgdYv@68-XF2ؒ2K=! 5ppgh5 Vn-Vl_ }U!xe~(k}z7/BGĹC*hi:a*H܊ȃɘ4[}J43!SZ<Ԇ3 9qSŒ? ч\! $2L=ӞIH10xFܚ8ۍxDd^1Xwsh+m!OO77H٬;Fv1h}u5RuRۉtD JQ?TĕRzi5{B"]B7CPjV `yhA+bNܓ,8fv¼ےKRk2tG(ˠ59'KoR*@ "Y^ (8ꕤ)eA1nO`%:UG\G/_PG6!׹ƖM!Q'r>NMK'4_ntv.[22[ 輤ۮ5[sD"IfY˃J Y4:h/Bllw.qkbS'VY V/Nz!x9 [c!Ș!/!ɞRbg!t5YM MKhv;' g"iFhDy@ O;.ʧ,n {b۷{ztSC矣 8xj樈R5=UT(@fbI1< &d:l'iz?:lmĹe{@{~_0tB uwD[hk>( B)AgmfF7LXgv'W ȤY'>{bG߀/;@%Jp[rD"!m 8GjˣeHo4:v$VR(h,$O8_FEM1@FE+=1qo)&'9WlMrr#۩vn62)ښ|cp[;VЈRɱCG'BG9:=w3.F_$G2[CRd:I6j^ԓ' h$VK rpI/Ie]#^%XƼZ5vrC MC_ >Nϧp #i*l}N- v^.#Y Z[KLkpjTAA*)8FӢl/iL?-(OU#HJЌ{({(=v-3 =>$59I'2g80F`2y$֗H}*riMn(I 2@-[>:u^I8Q.7']B%ZZ!:2TEK%bBΙI"))5Z؅^bjԸWlj^k>]/,s rͺm5 AA=@庻?mT$Á&Ή@U / sFz#x"()e||LbVĢew4zP.(JePHǓdtgZ{GRR2sC7̦=09-Hgՙ!ff.{gQ$\lMjrM>o_Kp2 b@}L]1) Ga{-5vBnmRZU *굹G;័߯z&tiR?ydKg؛UC]3mݽMUˇ_xtՃhmno~wsHfͻj42@*JP򠔐F}8|.R'w[٭yWr( D{Bʔ{l6aڱ#aTe}u10b`Tj~1󱙧YYFzW >g7s"4!iVu7^SۡOY6hE ȔqȽ4/E~Vdog?qO:ӥͩ.ܗD}lzz^rLDДCYfD S+yMAS gbz'pu>s :<]ڡleo(̉( aע]BFR, 0* U{)*MM_7Q )D:<ZEBgQ(98\xWWL˔mޠ8n=N:Ϫ2\'.ZPu,%Dft"][s+S~f7#~QkI]'ɺdqpXH[߷13I 6faV[%7\o~l,ۧ#)XJS|UPJj~$vc]sN Ծ;ZSٙu" cpLI!2h֊((:XEblݫV+j D)e#F˴XR g f^D0Of')F:FH 4*x7HQkO1I k* IdAH!C;T_$H8ߺѺ454c;K|ګ wG?ä3&d:CX+|8Ք|1^aOp| 0G?(NÛ< Ecǩ#H5%ÈU"O7|ő`@2@SD.bxm`<ln\Yuqa*ZT'KJT`t8V`iفZR9(|s0:QinɏҜE<࿕iśE61;#~^[k`)/lqa!6ѭ#1zaH0}և§RxBc{uuT֏:dۨ-s՘SgQ=Ly$ a48X$8VT5 ԝ.Na;8:oLx}1&?xNRVI!0D@; -s Mm1duیrNe|2s#b J?\zV$nKDĎY,Q!䓌ᅆ5fj’E*LѢTÃhwA+tR{++*o|YF8CH[OnP(Ri`VY=4ogE@qbrK ^Ȍ cLʍٌdlXmg uf,T>*.\&ɸY?ܲz&q J?{׎ Д5P#$8酢 4/뽓B 4,ET !){D]7& fhR*#a:0- 9rEfĶÁ_P1ws펻vޢl_ڝۦLs@T$RǸLEQ;U3)pJDY0r!c!Hh5> 'H CGzP*:Hc2a6r6aO/"0NkqfD"v6* 8R=)ѥ{%I1s ,2H{u^ Q+ 
IqբzHL8cu4(@&LS掍F,^\OQ3./.̸H:\pqݎlVkmh0znrD`,#r4 ".>. fC lV#lX_KJ`Uy XŇ0H[+|oM7Gχ@fm7ݓ-W>/aENߺmo4n2]WU'ËjVa㶎|*ҨuQzm=9) T=p@JKfX:%m>]UwqP[mBBpZ LycP^4]̕Ef ʶ$@T, "GjX"t*U nfx9`":Mg Cm+g.m'wR#3>y8P_1~hz[xy('.Ϳ<}y9MƏLSx$4:좯vޡ`674X7 &lв@M{ClŬ3hcަt8mƤ6f;A""3}N:~g&6$"lD(Byo}ȀE1lEhF""?1#eC n{AD8n& {-GHbrDƓdttܩb}WIW.yRmOB)Z3M,亱_<Vx[pQeMz/_VbɫR֌?\FX|K8FpmTOQxJ<CX=.(؇Q EE@(n':șE*Qe&&q$( 6 *m.Oxzgzဪaz1QkUW1mr[LӢN}\|`ͯRq92iFE+mu.YgֺÃvcq=HM+GE#Zb`BU" \%jURASI\%5r%WZv 'VW\I v1 MPFf+dprTYP~+ŷ!(G0 SpR77^rrI!1D /j'%,+VLO3cb>CXϨ nOD0{sT}9\ᒨT;\ER={d0OznhN;+8 0JJ-Ѣ2]DaLvWdz:1_{5Y;[fOʿ J;0NeB-P8CzABHm'g{sQ bWw ZWr,͗iK#ņ?L/>A19p>P]a+޼Rl{cp6~&Wįvwspoۛд܁JS?|<(k\8hö};]E^0疴^o˫їO6"gpTH]ZJL)RZ4ak^5 ϥ *.26c}ZL5AD9O2sԔq9eqL%]Tw|>Sg8ϣ=ϟ{nLξ;ZK*٨ "9Q(@$Hhi L09 A)E/,,,Tcq|!dV]3+ՁdVvÿS8!F# K"fD g%^yb*vryRgKHrU}T=k 0tJ*$ۄJf,FfXӅ8cW]h B½W^fd\ݎۢO&~:f@eod56"c^S3.>ϚҋYC7$Bɧ QWCVqQ(8xq`B3&NZ3^h6+UFBI(&p u*(O0SgK.>- s=~,椗_^jp{9k{]bzrgB/{6Fg}o?? $a4wJf~Y Aj#rr=wU{[]7_ö OXȾ2m+JP hK.0 & {u G?J\XV:2έ`lFnsszmsS?Va-h|U7mOG5aU=n]HNy5)3@J{e-;"qNuƲE5n_, Փ\^yϞk`Qؚ1!&jfa6*V֒Fh,k-5] @>͜QJ&&v?3:;_%2y*߶feN>0])ϔǂCyl9,=b?o]DP]ɞiʷfc&?ֆ]]CQk.<;N6ystV,7ہ{&':ؠvfސ[1 DŽ턷1MtmZ'.XOqo)Yd"JH:Ю.Mue=܆D̦ģ0 IrJXΘIƦ#_#?9#B +C(.뛄!yL e2#OL$Jj/ 4F+2tVMPҹƮ+偔- ^W=[ROR)ZJI0uhTM9\mҒDye*:YX%Zue5ZDwi)xLIhA4c)%p0 R2!m{iFye-(c">+2&q'ɹZDS*Km1r6[ Lfsm~R%<{x1ӧc>al6 >:Hr i.2īef\A{R)_!Y.J[ Z/ı{MRRzorY;r'zǍ],NP`D~*c\ZP)H(tTx0͌Nlr1μ xC >^I8;8\Zo,?']h;LCrrifM_+$D}80RrX }zفdzU^ʛ\xji[tAxn,KBLR%T*oB jxV[j 2up< iCcLD"FC 0צfjwBh{(~Μѷœ]\~7M5c~Ti+ˍ%:)bsg$( (U8o x_ Zrg~JGQ8ã_uIn}Ac;-(S%#UJhF{$H7e#C-(P7dOcG#)0d'xWG)*80U{m3?1ˑ79֙l$F:<"\9pGN3Ea?[BV{?ūޅH=~lfZxz~qQv`H.Q}K.o+mT*omyv|z }>nS; Ϯ5vQ\y[_x}9~f%FQ4`Ĝ4}l]pᏗo8| Y5+WbJ竖᫗Yd1j;Ta4\6zrf?]ws\nrժ+Հ`RWdzH]o/'7J6zw'5fRޞ)P}oN߿}'K܁ix($A=_B'Y/o~ҪҸliK/wn6򒷼bݻ#lϏZms,S~_\h3zza$[NXu*yU 0A%!%ʥ,/G7Ľl$8ga#A奂)hf& : pkK"E'{Vbm6pw6RqgLUqO8bX2}FehU^:ţ6y`bKw:ET*n11|;f _ssXՅ.-dpy x.d_:o@uI%dzz\@LW rńOf6YrD?uԛ:{y/}"C^e˄[Ǐ~b4Bs.l~9"oF-R;H|墦LեL(KYS9sm6y-o9k05?OG_ծYσ˳pAͮ.x8i .{ӻ^S6B={?/||Cn].sR/ƣ޿+_?z`|nUa7AI9I2HR(*uğΛJz²{HSn,K}șmwa* \ 
(Q':T6N HA{LQ{oiLNdG)6ϡyNj+H:~穤4~ 6L,(Az!`D(< 1BU`mPWТg _Qz.5-\mThTtxՅBuDG*QZ+#%F΍]mQہñv嫨r9 (cdJKP0UW JP"vk.pex PLFѰI&1"hif5yD*u`Z2Z#VaHwemH0p$蘉ؗy؞}iÁSJ"9"e{&")(2$[UE0J_~Cy, CRdƲh1A}6 ҙ8/ 5<' 4 1m#c&h9-s%5Ӕ$[D}5CrɂqrU`Ҝw"q*1ƒ2WFS3I3Da%!P0W4ISw4|ª0&39wn =Ss@Wp_f^=>Kk]D)!.%G*Vj7XYP. AF(5$MY:&NkALR~]JQIF3!w,6z]{L`'XI@Bmi)2g0*5g :2ΪdS:`e@fӚdu=7w0IncN.v~Zۊvl)r]Q-YR 򱤊rW-;rֆ##4:I'{B\ztFPX.Tt%pG\Z{CfK`,+h4v $i(j~Fփe!2#b\C% 8P-9Hv%n 9wH \~qJyT6e2i=)KMIX4,R㴤>sܤCs))q Jz (8ꕤ)eA1nO`%..Z VEOW>M\YrلDfp,*=/$h?1/Qprs]i [JyunV?nD. H2nNZVJ@iB:h/Bsؙ{g^L6ENdPOD*8tN_Jq?צIgVJ  w5ޓ' h$ c ᥓd7;d(z=Lό{fSӓ s'fXNё_R8k|_7n2UB[PEE-D4 /8yj58a.F.E#_gQ[bI6Ʀp}&g E|8Ym[/${u{yh: 8>Kh,xpXGRex4o[rs,V=^KC:[E70bzmozkP{} X?4p;]<\﷏f >II26Z^>YU=Z^yr>nd}Gq}}JN?Yօe#Q+SPyRif3oyjGo~#աI2U6p(*Fp/Xl~l~2)WfͲl~g|OvCK!2fdeS6D6sCJ5Z-jp^%)*pyINxl!i!jH ה$rrIn wuM}wJ۸p#h/?^j (^rXic?49A+⊋9A+҂="1p.A`pVփ[nl!{KF&Ȍh=-F"h#XTI"u4dW;D_>TS)sdgcҖRgG:f K`ʨ0!'=D_J2-8ß>|0c=@p#QkZo[VkZoGVkZo[CUkZo[VZVk.&\Y*uzQ5F=֨Xkc5F=֨jt2r]kczQ5F=֨X r4tR7j$Uԓ' h$VK rpQ/rKi˭AG.6X _m'z*Bv <~;ʌ+ںVHuH%( O8mB XE1PfUlP(V,}kɾ3 =>$59I'2g80FL=qdS*0Xį`ѻoept^d$2JN1LK9~3UWn*m83~ާ0 G?~}lHᨏN!V 8~?UtgWesjN$ N֥\}]?0J%(91Zœz?&ARlGl>!qܛ Kt~8-<(X⃆j<20ٽO @k*ndJt|`Z˫1[a8vc}t+-Gw4bbJ:̝y<ʚ.RZ93O@VuFw=i ?뫖ײ֢{MEKD5%.꽎%ɶȌZD[?0m58FJ/6r~ZfbZ?{ȱf uɺ9c 9ut Kj@\${P+/ c K[Nr@֑Qp"$-UWL;!cqN9I((_ߋ+FL0DXBL)c)NL*IrМm"b_d3FMրNfe.HnCYYrPrrW1!)"oBٹ( p-5@&<B[Ts8&$ژg]M7_ C`4:wg)OJL@} Pxbr^e s0G8$ӠD H{K`8\>YGtJt5RJߐэØL?\HWz+WϠ ޴^eʼL^f#AZSOn<{?m.&VH $2mH(/63aT֗-\|zK.dVSR90-"Й,1 ƈrN˧ dC`Rf☀-:1R(C# P`fnW;w4_|^PDZS!dgS9 VyN"oƝkm-"lyo{خVq`k\Ss(%B=+ݐ94 OɐTErIUp \쌏>XH:ind!'-`! I)k&wG'8]QR*.vQvV|RrJL2Kr 5 BsnIHo.=:Pv LUV}M/tYo~]\B ^0b,%s.N*O.N*$=FgU"d}2[Xɣf"IO{v%ztA\+~m̢o'yjb9OO)f(%;Y"FtVۺcOFU< ݔv\㖼 8fv+KeՕϷ{8B<3n@/$`臉⨓I)eeܫ]9:yȂ*."zv 0ǟOA.")HlZ͗'g9)W WmovuE>)XWD"-Xħ7HͼZp;i1ڒy{AW@xǖ|:&1WfcR7to{~"=oҌUɹ~3xrٖ]TĞ&uWuw-s]G?_ՃS%D. 
e.h>|yP6{2R7\OK Cxcq`ןSl61e$3soQ֖4#w+yq=F|ic8n9(Lme,Lbm2ݔUI&W+&8=q<>I?9P^vy\yvy~@h8>LL̔˹k$|ڥNFt&Ëw X<Ê Z=8[q)oceb~kZ.vk"E!>mh,a^PzsMп{ 'Egf B(c2c A3 N R0:鎥zT[Ys>Li3l1:B* ,h%a)^WV`ȟiqjkczGJŅ;o^[R9}7~#׊ ^pCQsU9yҗV3wghqk3}`s1v%H>x.5ft"R}7rl@#oHI% qWG#^u An/KjL`+set{ ʐO"U)u:"r)@2_JR\|}m6.|{RkmZǖu 4gG]M$sv?P!$Rv#DLJVaei2Je249[S =W[N'ǹvW@^S#}:asBBBB%M1VO8k8 dS̫+z i~R&]jt!:l09g7TMm+jDr`HS"rPcT ]ϐ9V-{)߫ n3ٮPۮt^ͳ*;qA X\kodŴhW9XEtZ`}va8/z%'+oD(ǼQ(L -HP ʑmˉ!F;(BPƹ4m $ TR$2ABTO5ZlE:Tg7BtȠqR'1<oゝDXS͋`O}J$G>&Hooq )p\iap)K$IR gsr>Ql;96ptòp~1׋T; >vNOݑe.O+LĖOH^ *f% @Xx 4rg:x`O= %\m d\}((`4ɫ@|)ׄǷO ؇wxe4eߔs=,>D.].ж|0-ĩ{k{Mz|0ۼ!f_ݺ_&Wr m96̚׵8aT﹨QbD-<k{~24CZgCOopOor?ٯ b+~L>J`M/ \z'.O8~.㤄HQ/j$(Y]EYYNgIvy07?/Ԭ/d˗E'3r}&g( G49AUr_Q##^)nor.'IQfQK^ )GcԂEÙdZ)OR"Ϙ(ib1J&ijb (w[ %QLmrpkʱ͊Si Iz"  '$8MY/5ͩx ,<Q<GPF {ggqƿՑ=jO~z]o'&&W\Y^Ϧg_,Py^đ9ϟũ>78||jQBDURI[ARژZ${)+QRl~.:A;,M`װ(pNfh{l!?9V^=XOhLv'\y~5o-fي1\HQ pc%w\J݂lH9a=--7K\?lb;5CFzT_1?׫ɰ됭f\W_w+6f p5\T˭uZ{Uvzۆ^ϫ%u.z^k}>\-m*k# rtGwtܦbXyOzA]N/n ̾;EKp>I*#k03 pL=Y;!AvB%f R\zmabҕc,3'_bf=uIwU|CdA̢ ⌳)¹I>hL0(ÂmmмQ(VÌ4JZS 1" 1AxGU: 3bnPHLm' wócv/")GE _>gEܹ8ؑ$/,1  ~U[=P a?F<,PB2 !! 
Nd8鮾䥈Ӝt53L,H;.~m&.N.\8U4/A|uM \rI ٵqG\ F[Dy[Jۧa; 7y,53boY:mo<}"lha.M>Y.|.Q>J1B+`V(4l2)6GH#N8'ʓH#`s(z$9}2ʁ,r4 +8ciw<<>{7Urr"6-2 UB;F}MInc'R:.i+d&Ȩ:Q(r}U"i Ѣq#8TŸZECΣ}igRVx>}@Sp.6Ot^g%%\z*\(dEQ= :8s+l.b4'Z8+yy\i u:]B rBouH2sfa1*Jn)`Tid,&nd,U b)£b᭜4̌;={ј}Qu7 *;MO_ 1R"gSVA,z-/W -,*d2\uu4OÐ=ΡaK%8 A1!D F&JSJ݈&fb jGƼ%P{{e!'K+.HՐ$A5ZS#³dk6$iA-CdЊ<*Ě,2%G(`]@xKZ[k~qdlRPD#ʋ\SKn<%Oўu$KB Hڮ*Ab%#,C4)@8jI*wq5D%Up>nWm]Wm=]>#NXY#oQJ (8*jȧq_o+ d)ڶZϐ);;$;ѽrSOGIѭ/W~kW b'WW].w$i;ʯՃZ*L݃k7yQDz9r"&0I:flRM {]i$(WZuekڄ,ƶL jWFɇx#Snl[-( /v4mqls4ۡͻ;T|vdor#ڇ~[$jv=clGIj:iUmQ6)QRiI6PP9%mŬj`]Un9 fu[|P/C&)Ho<тhWJ"2BYMKhIde\?ٻ6$*K roۘa`EyV7'xIDEʀe<"ȈHS^ 2J#"(X+\$JKƺmms7QEL6YeQ./ЙsLpj_p(~@` qh<[l韁,y2,ơpoR|K!)~dKۧ/kWX&70ݬ\ G'@v]Vn&(K9g e>?>>`5f7QBj:xr??Z?TrBu}5X+F{6qqgoIOx=`S6o[I{]g8|wkiֶ^6g)zr-vq֩v[݁7u쵲2aDD&p TQ͍QmQ'(_:^,I|9 ھw~/ˬYT}2Q&xݗC88cy0Ź%tZ467Wb=ڏͯiYnk5gl5Ĵ]k?Pe1mhғ踡H.vt Ef!cD bس茈!ap) 6rV&t6\3n(iAŶLJ]bNo?|"&M3BUHKefJgɕJ7DR1pCJ sAPz%`R)JxT14>B:@[Z<ͰןFHFǤ3A 3â K`QE&ds}Ǔk ~-\Tw7^_ g.1;!hwc}[c;ɺѰ: n'tOBZ"ziK/njY3dy3ly3SH5HB>Elu;7Ll_utN.kuI_5TITc)J^ߥO!,c|^u~T{>*`MV~-og}@tOz>cנq#0 R$  gѴiho4UlgdA |vY]^]j˜m-/я]=p)fWxW]+@6Udl)UsW!|I2QAuTNZ`_kͣGHxoc韅 ZEqhK͌61TIYd(`Ic#l?X.k`F v"\TjJD`U4NXɂU&:LH[H5{:UĚ(r@(ܹV5a; Jua;Imsх!niB[a]]akRK ЁrJA\TD5IV4X;|n!MZ'huFiu[@Ja^ Z{@z@>:xO"1=lz/d Yڢ_C7~-C@lmB »/e$'z۴8󳋼vw: -Md}̀rޖv.Dm֎ڑ֭ӷ dK!s#j\JG'0J0ͦsM<6ɳ>w^>}NuXZ_Iq$>ʩ &p;cS\A7ȼr{2(wLNy;p<\Źl(4ͥo)Jf'醭t.XJ),:lD]6W3cjeݗZ0mS~]wd**Nr2nԔˉE:Xsemΰw"1-L{i=-/"͗DҶ?Fa&J LH1S$#u`@|'Ҵɥڟ8A!%Z;5LT,a)Bp0D(O^ Ja1 !rRgG/6**b)7{rIo53Hm"a9Nun_c^zuSWsd Z* Ym$V"gF#@iGHk AS `p;AVp В&4jV,h%ȇ.ޮ-. å+Ђ%i>K DhL$xXk(TWҵ2FH+Fhr2Z@(@U` 9K;#ld@j{ *~J^z5OCцKUTo7G,Q?lF.SO8ٌ߽ZAKfTʙ`0((+(Kd68 q[:)=Wc .EBRoEL!"w-̣,y2,5s޴-B6TnگCyx9ʏTw>vFDfmxg8w=2hTE{<{[n;-?G~[;W!Z%'|N32vv r%,r^t͈B4OCOa'|o>a}wxqNt #'4+\Fb&h4lw LޏBv".)uU1}nX}8L0ˇNlm/ҝkFQb˝u@>MBQhYSMA]),/HEǸ/xD~16+) ,y.Vg^e5VUEZ*LgU뿽{sґͬb2?d%<@rFϵ,wJÃ&M&#ƛIdWQTͭ:g\r !,3ٸtnkh,tn9w$XLja0Hk!,ȇZj AZHTR޸^kAS$ ? q+F\%r=qUzPhճWbũGR,vE0xq*[q(j)UWrf{i/ĕxZw1BW@d`U"Wj".6+B Jr|(*QmO TX4%+. 
wziI[ͮM%,n|ۯe=\LeO*,e[k8AA4*rT{}n:Ass.Yagl*z.+2h7?O?}vk\عI=ͤIyXبR:h/5i%")x@ ᪊ӧx!X-$>t/dJȓs=@i6nZK9~ӆzca㸑/v$I%9#N.ł5ь2yC+!Mϳ%1 k!WG??%s0=+y܆@ʣ.J 6DLaYf4!iIc7ͥBl?XNjOefhFax-| b,nxY_Wa|8&&Շez#]L< ~g_BMv?<ʏ ~\MGj~5=T͇Tuhi rtlT#?  #"#bNccIHB" $&*aX#cPIHv<^׮>{x3˭>zs$ueڄX{9/-r!OGnX>H'?/"srk5s F)wLZ#p-1H]:A& UHb@$Z)enҾ 8ui>:v`8GzK<2EnpdZ5LxAu!At9Mo\4DDhtBÂ=zVڄc*>O+ų?UTˤBϊ} = vZWgEzi|V N-^ԧ>G ,>v齃IJi4[rV9j:&!!iUz!iUlCZiAN 껚4֟O" ]}q"}̭C?->\ƸPuVe 5dM 6k#/VO6J_=A(3.4Khx0ix3O99TqRE\몢M5ș(RTF&e'N{D4Tw]O,EG$a&ߪ: ˙ڹ1*+XaU5S+ 0UotW321E6C2vH ][R7۹_q҂n^W-t&x2'7 GZ߰BjarڡX2Zny{Z|d\۱~}JU֫tʠxjF3w+vs[Ns˾I6ֹW#WltS:&SlXB10N4)(hq [,&Q6Zת_^hKI-$wgC_9>:bq֌M&Z )9XŔC'!59:V^@?vܞޯtcam`ݪ7a?_NN WGp@)*Gx䪳$ŖgInuތ2(8:Fia tQ*=e<[-Bw;}G$}E܆{L,c05(Ci w MX5u[mrOkKÖ1l)lBp sz_&?.)O7 `ya;ߑ c'm41Ws/Sqhj4ǚm=3N5;-p,b; \z7TeZxDȌ2sA {n$\.%6!`BL1jv̂Rʃp-azqywkl3LnS\j{ uδ-!o׹٫ٖ7,TnNNkx, 5 L:*7"2)3 b9*7̙JFIj aV`[%uqƽY< x0U^G*#R"%1 v ̰48K`1sNFeKb'JhY '.a{ BEY<-y|d|Y]x3 # SaͿ~,ǖ-A8:n/_1Z"%Ɨt in573,Cd#X?^wǫ:YcndS }՘Ӓ1#:J꡶`O=B1ErrCׇG/7T*@'~)+ۅ\:wt͛~~~{7xskq^IQ?#Cd47?lѴlihn*As^WI.7|:w:fa}uwzL~^fM p%wb_gWwGI%鞻م(r^@̘UA~\K⋕"*FEH0Q9!qZV+j6:R3cGx :q; LQ`&pcj!.8\h <z(D"x'V`SF5Җ(Rw:9Ď[q)ow+nUN?9NܹVv'mjFhi;ct v6>v);nBXHX; vN6_PvI2_ (@>- 0vY` Ѐ1knd['7Ee*ie(#&=}-te˒>drR߆A^{oz|7 $+2ݣj^o2sMkϭ\-hRp -F8SU^(tf& ֌nMրW/TՋtj[3z?0]C X=?&%B0%(_'<}lm~RR5"K&;n)2Y#ęWNj_{NB%p!oL~YpoVE1Ysr^UtX^po6l}C=Ycy)U#;z~@M OQ|Ϡ͏%+CԿ`=%==)@d8f87/sJ笿pa9]zs;6B($TC; mi+ʣ5}j.26ca;f!D9O!9j8muKi >q8B~Zdy<+|;G'$Vz"Ǟ)nqaʱFY9-x[1*Bx̄"OGYvkbPj|{\y!9?Vc| M/RN캅b8I0Bp 4b8c`#z-7ۚ/hk8W5EAE--SQQ 1%iǕeg' =K]`B<UX-zg՘ 8豉hj4BZ"Za.J}rްMU}W8oDD6qD T[BGaب ̼" jPCR ]QQq䃁fH$*0䭔H 5qj^uK0g);d׳G䁜=򾄆ǷLq~Qʵ&(Zև?nV/{BzJEp趢Hqk#N:98m5#J*] 70 ()&zSNq#7 !`̙3UZiƮP >ӓmǓ=˛yqȍ{/ tL7_d +8CɡF2Hp ED^{'No4T0QS=]D`RgdHu2u;3~ؒFdq]]="tz7 4v6扚rCK 20 y&̝ϨW-BD CF7NL h$胕I' F$1i. 
&eץܝE6IW/NSQ_E^bF|RsJx_LZf"$樜fkv^2~)b_agcW(;r7.lYr&']#Y6w]iӳE\~S4Oӭg2;Q=YԚ5%zky}{oWIɖWѷGQpƋdi4Z&3MdTr(rvYz?Ÿ:kZqerNcLiIKa9 6 6Wyk!qbȂ"8 Eef,1hmPDm R;s0dAĄ5mp`VYᜓlƗ^%bymY'K7j+]Ma; L24 C!1K(9LL JƜ fBB durZC)#xڛ䡮H~_ g(MS`Zn}qP|y#u?AG([\#z!臒B'OyR)٭Ծl>qQ 2,KPnQ$*=a3F*]>@w ^DWLY1,I 5w'gt`ֹ̰cי9[2w&ƣ߿=FކV׹of^q? ]ޗw9="Z?g|)+hYht%U.Ӱ4lVe"Ia2-c\38 whDG\IHb #>QmNᥑ s`]'7wf3Nx4u屃_O9L9,c=rOsfٺf֓q}75tzޅU!4ACl 8(9s yNHBr*tBb;.H ,D-L|x56g:;b`8 tA @ȚBE>J'UDմNE@s2~:}\6i~n@\98Q8+B١[^\sX=\al0nLuT*]:<١8wD[hpy^0An"UC7lt`tFerOd\w@zd@68# \ {BlI[s"d:y鍑\tft(ܽ,B.NZCƧaSNbuwW^"J0 :(M \1ݒDR=+yJ,P)$>ӱD]痓:Gu/`2ǘ7f=⿵:gm[*wwB4Wi yliE!Bâ#*nPpdi"!ZfYfq{eRvAD * iS^.]Lx:)h$ 6ie$3fv z"qܘ83gVjU\Nʜ-4bt~Z_y4WRJe[&aK_Q3@4uF1jMRw%!Hfz3%9$ž^/I2׊ѨdnO9UDGnK%I?2B%_PIc[2BbNXZL$`Qi,&;貊`J1TXX+u9+[ށJdpC0,r{{ vHHA9@c5Q#QEH7P^$`Azb*"tE,N//>]L1652X؀ә8K.~!Tj磬>߃_ $}ˬ6npEbq4&s`nZ[WX-H@k|]/8MA Y9+e:$N s H2^q5揣$R؍7% !B_xZpOk^ b novϟevvE8 2ӣ8| 0I}38=\˟yx4|M+ɢiJL3A_YҡwB\G?PW.ZGSǾ֟;^tfyX~Z22 okim ŋZ wd%ؕ;aul h4Ψԋ+5>oN5a|:(ISmcN#yK1|; .5oR6mq=^\hKO}Uz}=ZB.˽oo&-u> G4lsWLm4`p[?G߽Ygpm&Sl+8^nB}$m»{:8sS inS"Y{2z Nm\J+N/eMwN%y>T'(ON V4E4e LQ&|(>2y!)c8検ڨ`+gѺ$`9{ iI(\@"Jn1KYf*pZk7/70v̜L7PLv([ẍ́F)Vz''<?S`EB@ؓAZ&$G, KT|gshH$r$a&s}ܛTpځuNoBL.ii )e:XCtΠdJd@FXCL 1q/ ش~co :ʾ~ٟm3m5 _ޞB*#SzG"Rzh wd%pwN`bCqWZ}wWddW讌ќ_U @hPzv>y&4"*\0.LU-?0g4S#=Mi4 oQAʗwmDyb9PY̵*Zw/]X+8cU1XqW\%]kMgZ+z9lFs_gw s5Y-a֊=Y){3pW]_=jU1XqW\s0XkEUz3A*+u0bmUEuW]ipGkmz>_T6_?E* 7Rdlա Q+ j{W]Gs#X z.i_iL;;=Jg14#W0/QKD3h 9qVՒ2 32z2(a˛s~LN/.o;+;OCm>ht.golOیrHVŴm2϶Z\˶wuc0ѭ97~)/xxгLVߪ[5oUj X5o}))ۓ.j[5oU󷢣9:0\Nu1xp3טC٩.bwJj_w_NZQ%+UJW*Q^%+| RU*Q^%ʫDy(UJ (i U*Q^%ʫDy(U%B=YgwtxǷ8e.6QDN9@chͨiUUgK~pw[`Az= xf1޵6r#vswmQ| f ceQ8-Y˓ "cU%Jj[#Tƾb)n{O&$=U>g#)s:("OHE*Q%ONˮ|>B? 
~rp1pX<'C<ӻۏS%%D* )VimLEL e+/CI$*cm;7Š[`da?7" J1{IpegT"Y a:z;7"]=4"tNpNECtˑEɓ} J771 XP ;k n&ICk2R$m3΋XVjX gѪ&||ś+UNsKn_@֞]/mK=ۛȭܺL]ukZ6>-E?[n[Wmr.޺MΎ[v>'ƝɖO;# fuV?giL5ӟ7 |[Fs{>5Z33 c/DM-:d;ܲM)Gdz[Ы'[i$q]d|zG,tx]ĒSͲjKcWENץ+h8HKU΀Gä!57:Eb xC >^$oO*mWYϟ}Tmm¼ cB|H., i4 (c2T (E#yO>1oÉD:nC\;Ax0'K^X.*ǭ(oՋ ՖA hLEt ̈́RH#…8QȄQx9i (|_=u~q)y3%CT&W ݠNw~-hw^JEs%`۞6:b.Zݡ=* Ω3 Txq({t4#N]txKT~tW爝':YpZ'p1x'|"ΨFE d?B9ag죒Sl]7z8g7gqdg~c7ZOHt6#P%\\2^fY[K^, rKX\6֩0 4- ޝnnWV1J|6^pyϦ&LKZLms 8|m.+sZٴ6&'7ͮY̞* bnoq ?^k.GǻYcuo843G~MðanǓo; ]?9ζMNQ>|F0WCb1jqGɴwW$ 2l3:V_xi[vCyΆopt7/.)ӗ߿OsF'ӧ}φ7?`hUjho>4װKXP`݂3.qo(C8r ǻޏ_R,͍Ghy<5Q#f+8 +H6n6R> hv/ lBx(̿_lՏv{߰fNAs5 X+\41/R4IxZ"H0=kAkXeDC!QV)czDm78Dw:EtN[1xyߞLzQmUv^m'^88~ q'H9w⎬ bPn֛Vp^}>۾6]_>u4ˇm/iE UC 5oj\&R5.S pU2q}ո?L8 Õ3|_]0&J+W3[IJߢ [;iO(' ;2Q ɴ'i ; >P1@Cf(8 GdRXfA]Q1xΒj}b'm?GLNN+,?*qPgQ$jƴgo>wuͮe#m^\Pi&FU @ r"GO #H#zlpç9FI_W[[b{7]4i y,$,Q/46+UFBI#KN)9m<xd: =ly4 r Kck@ɢRJNHbr4LRX Syn(' g{z篌o~דg<+Pw2pBq1AZ* "M0ȵDN]5Of7uUذg?JYEԋҚ& 7py':ξp"B@K& "b9Ғ?%h4^ck"S ?({Zkl)>?@54Q5=1}h'Kq1xع>/>~n/'aphݙzjWG 0Ɓ3҄\hmQ,~Vngd572U:S6Y%G}xXLT rZ];Skfv1!&{7]vIjY; qw]45Z6=pjBNͲ(QGR+S7Ay_溬Pbs@ 䊶aft}W]nUs_H\VYwϦk-{>[Ǚ=ơd0nBmnֶ磾 Km >|e;4K 7fn;Oӯx=>m}\4_J鄌!(w+w-s Q–j, oo?xSf;I0Je(9`F4!܇BJCRrzNFGM262|̗#T袬exN;n"䙋2Y"w3.o(C0Ebkʬ:e?IbQ%aXzE@/sOy*qxw±L֦8$hƙ@-;f)<1WT(J>t@kre +'E.GQBe2WqPc'&q#4Hp#qg)9.pyA#,xhb2dt^"ZV8kՂw? Bt{!:J2y',cZ[ :86%ЃQѤPiMpw"ΝCZ рy  ' Y1 }YYu{ᝦEp`n 6DH)RA[A)U9 (qJ,4(dֳw󏶖wwFH:5PMc5W…:JؤN'>{g批&3s}|}vɕa,g'_^tr :2NX:ed; ϑ닔=b]qA ze."wB9Xn|HʠOUmɃ{cҔv q 9N:N Y0Z]gNu7vrοߟﰆ3D4hk;L,){Y&8g5/*t?jT 6xyD+!ے=ùZߔJd-z49_wY////oʾf׼DKwKٳdzTsK=oϮoV~77HpJc5vqy?Gw!w3BvƞhAZ`k*BG٣'<\ŋC ]Ft-h^Xyqѓ]h%PE@Jޅ$T))rҢ;e<$wp`0)ϧGw/8&XL{;0cfǍ#fWGWA8+CzCwum1%hw\c-Dt[BΆ:ș9̆P!2ժ3d"C0d7 t -*"\9gdsμai>o'3 O,4Vekءv+C`O*CsG}XY]h9vkt ;jםk~e]]\E}ue2a(\1*|:Nt ?y:3K|{n 3 v䂿nv)=OxZi~y;:Sw ^S!G~֏џٺw^\W~~? 
[?߻zzm Dp߳a~[oo_~o.6T⪿߹]^\{X|AvD ~A">?Adqv/bm۫^\Y&SD+͞ϏVat,yIt]NzC-r pG"ֲ(8P pL0tP!Gg:޵J-#fP8,2Ctt>io0.2LAc>'cx,9_w7YgI)qrE8JKa&sQ2uBg@bBP/Ɨ&HM=EMdu)z̒IςVL8&jV,B̈Vu46;-?GP7{\7-66̼26D8 8S[#2v_nM8/*vTܯj1шyo|DKDYEVE/ 4Ym(~Sycw9pKbwX[qdмz1 5z9ͬ^\[Y ԾzA*wՋɬ^pEnW$Vpj~Txq5\9nǽ ;&POhrպx{vǚ-U *T N8eb Y}X^ެnM6|cnpJpm~D1r{sv}q/Rm?DH'@v{_% 4&ps-ߒգ 3y~QjUAr~lz=G@m+fy%hgUlv|PJK-Y^r \('zz ]%D+jxgWWzkiks JG⁾?W"h|uBIt|*%E.yT* s>fmuZ'C)`d䶳e&sryzзeUCAe溠 WTZ7j$eCB \3"ARk"v&+NؼS *n 0K/3s*&3 eM6riU'̝Ʉ5CN89R;4Saw2W@;ΉN>*ZZg&ŇjATTy[-vH\e[5v\J;z1RaU H (WcZ3Nƃ*]]A.{8Vθ:  X3"#g<֌hJnf\MWi[ L4+;jWRWĕpNkP;[@ ZjIJ1hW(XnW$WVpEju 4zqa  H.43Nje";g\MW-9(Xc\[Uv\J3;ĕcW=+-@Jq 9cYO{(nk$ض3gGrG;LETr7czvi W(Xv\l+"z\Jg\\N8G7Lco&Wl] SkGrtPٜ ̸: 0!\%*5j Pdv\Jg\MWk-MC"5+R [WWς+!!\`m3"N+TWRWĕBA\3Bd+"#r$qV%gkю3Hre3Svt"3&+-4- +Λ\Zj4rqe$35+lqIs ՎMtڎvϸz\9ZI2`RBfg;Dfjk}óW(9]:h'̰Z;0Z;ð55^a`RuAV&hJ/BfP!2j-TL*M|YppHmW(Wp H~"Ty"? SW۱ +5Bsu0,|C^g屸BvM|* HWҙWWؖp!y\ZTq5\ HkW(nWVqE*q5A\IU餆 v\\͚H-gWig\MW )g ;0c%R͸"NpaL7+Hm+RWĕlC"F5+x+B0Va*<>I\9ijN' gN>Q(^nke--1`~3koFjmV%tU9EL;hɪ$ Y$6@jWq5\}YKcDY={{P;Rg׳s*1hux,Zvw?Ǐ?n6y~X7e?+mBwBg=ÿvK_ſK}8 n_@_| FZ*w{h[L9糕Ydz{d+Ccʮ~Z|W|ѥκ}ܗa[wVx=z Ū4 6MM_cT~c=ʶ,+zGAQ/ײpnj^H=?!cAwyg| D:w)2^u֡l߯yipsi^lɠH^At,dBCԪhAGB%k =yn֒8^ۇZ.۳kO97$|:,V;H\5/"Cd%- +BˠϔgrViS]=ߦ`>HY|'cW\B1 ^ޘ@Òb݁AqT~ه5 0S&A0u6Rs_X@ɜ"ٻ8,+ cvJc}"OYsզ:3NU^dc)рOBr>=7g!h^UU oZϩVRnm5ljxIl1g@Ԩc;S\kn`ts=i:wK9jkmߠg1X6cBFUymt&$$XKQ,KBJcu:`Ɨ ZRfb=i}Zm$Bb ɨ|W!'c#ZZal%QE6 VA`Y0h!Qsװ#Ҋ IeOhBJvlt)K(<4&{4::|k@yGta2PPx(f$sbu2Jkk(|-s`uQ6`ťAwGV1- YZec``QT.` hJ֬G7lmXC *TT@5w˽AAQl#Z Ò` UC:\vAjc4)@F&E0 W؄fWNKYbpPTZDJg=l4[-P^f$߈XqvLfj! ]V۞dWNc@7CAw^KCp2Pư)0mߺ 1K@ JAU.1a |k)& Agb͂G Äb :TjMPPg"52\p(lL6 _ gA+l*8(@Xhq`jP Y~F U?о`4u i0ݕ/h!8K]ZT5{QQRA}k!YcI KV5:+ ʚ%ftYk|B HۃB6zp*{M2>Q_ݡ=bA]_1М,4; zP0B\~ 38`7zqsW_֜sީ/{*ַUu#D m3bka%a1=Cw / Td6Q t56b U&5:DfKÝ.̓9ZzOyLPBb2$kuk+<o 1t|c`¢JrCk4=Vdxe&[_+Hja)?^\m>Y.Ru+`%V  XF;KrIn xC4*j]J3n@R@"42ݏEy@r%bkVcCYD סPVvܢe4+oVQ`:#,_lţ_ca,A,J3-VUkʐ pyPZGxwGDޙכ a2,Ƣ tרBj(Yj-Ѫ?h+,*55gV3 o= ZT ioQ! 
IHu }>FKP0 ֛.26`|M/]ϧ3X2_LiϷIvLf`!nn8c`3 =k(.> U-mvÚZsLڌaγF9@a[7n(g# @zQF6pݰ(a!/{úSiC񯻽?}ŠxMR~wth^䍡UA Gʅ?0mQHitrQlFR<z U *cKULmyLG[1:yt@୭XEѼRO <֢] u(P5j3t&1jj=ږz痽G&&×P3s jX*^Y;8mP Fxi֚&6X bd=pe6~Z)Ahg=>\2 'Ei#8 N’1 3Itk oޢp \8 lnT *KcbX`XKEw,*f-$?QEp$b/m*Iaw0o~>L7qr7~|ǟ췞ˏ_yo>VRnlk9mҮwtW)x#b:orw{0>Fo>cmHvg/_j^(DK>y[fV|_wgJgߺ1pnwzws\/w_ɿF6ʨdj=#Uctv=Fd|9uG/g>I@?~$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIt@U=i<Œ<,œ&PR$)&d?% $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@&iUI CJInpI>$<#Ia@I IG'I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$Б'@ Z$@O&6he@OCiq%@V@Q@@3S @3 @HH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I I@íOq}Քz?\kvno.wHʺjK?:P/S.kVDW 8Mf-tc+Fi ҕk^\򫉅tt1&+k)X"b+RW w=wԱbOFdBWdR ( BW'HWjEt!FBWchJ)ߝʤvZvpj>d LrKSw++L+uz-th߁Q> URI߫ϻUi\V?k]e{+DZ~v[pۋkoMc6EWF%k3j:iew:v^ɾw-d|[pw9۷nۦxW5`~/_F'5JSq%Mfc2%Y!,/1ˏ\WHCM2QN~JocG(< /NF'Ƿnxдz{1U&J;7mM-8]Ku3UmJTc4w*|V[}ssx87}wZvç2} 2&*c2H)'\4Yl꼺gyDa_}s๳_zAcBL{ L*on_0]q1 gFdKX2jM_7 7>"/-ërit.-$~xǚ>aq=WB7(b0=rrYt4K{߷ ]= B ' "]"ϝz Wi̊Js?Wڅ4] ]o_]p0i5tpm<*O{6q|yvOwƧ_Mtq^EZw6D]{GܜsAZpjNUJ.?0>Xd#]7 B)<,'_+ݶ}+]>o/q|qͽd޿G_\W\ܽY~>a?s \'уy?߇d3?OO:g/_jQbj݅`.;~ԚgZ()[T=<y8w_.Q|-$-/;;9V,K>ٙ {-KՒ,S]ǣEXUW}Eh_y]v*J>Bke ua}Tld,935m[Nx>f?勚fuh\i9fqt}9k->.?jˍthş3Ea7 wv~yQG0<{3Gr'^a+'O ՟esˎDzeCT“Ϣ6<6\hz 뎇JF_Nj9:va>7T׶Rg;Zs+ls2悰roFLqOD$+V*7F+B/Ez(Çh8ɍQ;]^Mg_˿2rhؘ@cC?_-up7?e' 1I9 &)l !3?6?5͑Mn>Xe릭u{%Ǚ^3_uHڿ_/ôK26TATe-52ǜ1`-{tr^6C aEz>EQjZ7QfQK^ )GtgAZL2c%1(%BPاb1Jok18o3 RHF%'c9]dbHSi0lĕ^j<'㉂,TDb`UBӲ AU-¯2c7ͷO6V%㡓2_shn0LyzX.c>!P*mq¥]w!&dֲ5 ȿ } Z$%% MGveuJI5Vkv ? 
Ay}ølj3NC(wx3l$QvoƎIV{z*}plg[~;}U :Er }nc]Dց]m)LNnEYuJvOYc[nƟ[WmtN=޷}2ݳt-+lYuzxQϫ;1yn~f;[>mXG7g, yFXs9eݝNbjK2+*Sg?a((UjL};Kew&\kۜiN:d;ܺf X^f%BNvʚ}7wW7ٯY:> ]`(ը{(co;HxZ-x@{,H&q9A(IV9ky2D^0gGS)$RjqI#̧8 Hv{ - !=}| iu7wW@IGYi"uqf+I S[4" `]8w hᝋQ$O$ w<d@iO Y ;'AMwE$.}b"F 5A)΄72),樀'C2e.zDDI 8ƈO"Tu\O^'SFMQ] *QQ; E4EGpIk  N)}iRߨ˼qYCX}CG1>92c8T:WK\%O򦒵8 ։ hFq3爰’9Y +O@aYNἬWy% 'Z)CZpN B R6*-_2* qƱz')=ۇfվ>mN&qx1BJ,^9˭Y*H5j^.7:kXTdeh!+{60C]7M.$L@dPbD02Wt]Rn6 k*HRtݪ^o%xjI TNrkyYQ J2LJE$iA5CDP<*5!/YdK(*ģRa1rF_^Dfx6:})8V"q;+/rME.yTMK>HD"9]UĬKF'LYh4)@8ՒT,e9*I3PX\D,F,ڡ\bYK\d\.šC/DJ$d0a%LR  D"\<\<yX;PX"`N7q rKr_s*,r'ܬM 8=EDdY)\ꙿw.ƥ5("D TdBʠdW!C,#H@]J5)2UؐG0Z=R5{b-;p)>5?~ga3x|`7_1K:T9O_ 6B_%  ʦjŝ{X4r4󴾝QM"9Y3eW܅x[_#CX@\t}ۈt۾v~[d5fx}]^b=1!&{;Y| 5iUC2+|Ga~6m|^ b&Rd-f77݃Q>pv1!,_aQǽC7/u(X34MűrE[a83lؽ>UK_Hܳݿ?oYGrCd|>|_кbۦmoo<#D>?ģ ÆVS8QǎF@Q KiϏn6~!.j?gCCsoj.>lq߬n{xnE}bwLL$tS:ڬwl6t#]2||.i]~-cz&kAwma1 u&4~޽ɼSְg"J2[ͱ4mv*pay)Q3,0 >$V#u2: BƦt^t^y>B. \yMBU+|vTh#a C-357:Eb xC >.I8;9^]ڭ?$hTgz㸱_)eؔC 3 qyYXVRg1}/jjY]rW\VZnԳ5k2*H!YjU5B)vHڀnx~M:WT\wx[n6V932u"Ky ŵeVI3:xcsp*͔B.p΢@M76' !KiyCŶ6ÂM;-K)Ds^_œzM6En'Oe8[ot״,^*}&6hY B潎XʴhBo=I֐8 F-\ u8wmz]/"P1pZP22D hM BRB Ш~ pc3hO/08sFsdR YޫĄP f0Y02'GaO4=F"g Gnq:Ds@d+2e6Uƨ9FQRrfviYgRt-)ݖhA]dw} Dٚ)Ś/surBjO>gir=(Nj_MWn%{ (IT(H~Llg9t#D`؄?Pӳ8 V04MuIв^2ʆ `JM&ܟ}>\3[[F!ͪluf,yU^@_iy7~v5M}]5.4[b嚔??ՅkHyfkXxRMv X%}5y_#zѵ5JM;8խ>͞5٢Ƌ //hN̹tN~[aoH5~jv_[|ܳ4zjە-Uf'˛T ?,_ܴyz~zP$l[wrU+jAi:ITOsiRae(jBpx~R TќuvN/>t ѻW/͋w^zy_{8/p- G+!!ch/\iV޼iaM/qA }ǴmvyE_47mӰXrm/FI(s2͉G9hY5">- WQ')Gb9 *p3yl(&mj9xɤmKǾx>Me#AgS 4]k.Xg#igQ[tN s%qd+bU.bW͆9Ս b)hD0!*)*UA ˬpʙNBq[ n(}lzx;tqGv㶣-ۺz:Y-5񎪭.\&qBKѡrW;ߚH_mdUoMR4cC -,|66E H VzФ(9oNǫ۹OXx\_>gx?MSL U>QuEϕ15 QU5}wUC?_GGOM =z['[t]~gnӽ՗VI:~^j~YتOfw~>~:70ꢹMM{B T/&ղg]⤻),̲ , hIx4 ,RJff)IƝIj] YMАJcB`&1 XbKG@PǤAQv%1D`.&5/&nʖ"]fhmt-1-g66@Z.گsf"YX#RAg|- dMJb3̖l=]Rp)a\2YFtւ!yõQjI8*d u Nk 00Pjp)v]JcPI..OMgkzqv}_vEkw}ạw|:" |gH׶NE,锌tΎUe  ųLe;Ƶ*xgUZG@ dɁּd7kY[u Wr<ʫvNz+_CMoS4v`Yɲj'*.?O5WRWWg-仁 
(-8]Kms:7J`&dXX7H:Ab;.HْE&#W>dfaȒCoiwdH#cs!>C=jʡSVre58(6.:0/ކԟ;n?l8qXօrBwW|~j5V]6د.V`n>ϕA_ݓĎʥݸ6#;W\oUYbPd(UmuVĐj戴 "$NyBBstabT^ieTQYad `ij߫`]TN[G !1%c@3J_8"7>ۂus@w1>; ]);dIVRU*" ui9OY٩IX8-_W{n_ ]yWcTᘐc^IVT8eL:(FVh &Vq3ߣM^v[BbNjZK.pˡ}6.1H|ٹly.y+[_(7jĜ.&" rdFZ<p9qA*9f4 AХCHuH:MhouQh[o ہ{4{rеs{N\t='9Z(v,9\#9n:`A `{CWW:;p{tU u*(QtutUtT[{CW.`_誠ݵ=]tut%'"G*p Zy"vi tu8t%=Xg51 b! }ҮC&c }vUA@WHW^ѕf *peo N,YNʸ {5q,\V2y|=G Ox6.?CRBĀY~U}~јNgi'z:j#=b(m>~T㎝_.h^2k8*+kn>k,љW_ɐH(ޕ6f UU|{ 3U,o.3}l=7ϟ٨%QX[-p'VAfC)4}ko1pSSCe4뭆S.s `[ tP'Ll%wdet ;N*Y}j!YU[s]J$G(Í1/z[N}13&Irٸq4~M빑bD&dO//.WG#Mod d8A<2d7޸բG[ ekqA98)i'EW9sAkePr6|;t9ꈁfJlj-{ Zi'JAWjMo]F \kBWu*(:+ =+ ]BW骠D1ҕR#"EoZBW+tUPZ;y#"(]ZdtUPr6ҕ3#*E]R@cd `CWžЕ%0V {CW ջ+Bi*t=o~]1<A?;b$S\AS 0jW߬7K 1"M[;*]YTLDP}v)FCWCIS3Va}zpCoJ1JAWzM$H=+@ }kW>]FtutťzDW\" ]\#Њ]oOWՠ]"] Uxo5/tEhu^*(Ve'+TZ7tU>i*ƻNW%] ]OtU?g@[tU׮ J] ])M|?h-B$;OWRJkOtEUˡ/tUТ:]r8t%?U;u=ZG_ 7gPȠ xSO''%\/aY%,΅>kvc%Zݖ{Xi0:kF҉qDVrκ%f'_&p'>"X]7ߎ/р"a&qW^?*M ۡ%αb˥el]^./n?"mKaߝouXw?nV?uyq@ӂS]Հ<3wR'P&/t$JW&}p:՗_N^͓%FX~:] TKSۢ u2Gz[|=PNܢڑObsZnb6x>KC?6 ~n/E*{c^>@xvI/޾{1Q ,3r/x$ǫ8![Ͻ[Ljm!W/w(bK|"|t_@{{ynR?[7t#2>;f*rexj|-J=Τ}  3tvH4hԪӒSŋE8qW=o/ ~Lr_vlҷ&%qft@8咍Y\'%'`f-uc  !ç Nr8z=o0ٻuqmmN/r\*n'OI~#v­$_M_shmzS&'H&v)N>?#~]2;Yf:NW\]+]":ך"aaX^YrA>pzћ.N6Yt &K >@[b;yFP%3PAUT9]ټRY9MwL'_]xrr3cA[72ϗbb&ȁ1 Tz Z)SKXMNZJٕ9ɡ\KgAЯ|1&RPy[9Zzs;11Fk_wkV&vc^A7mKʹ0NO}ۖ(B=ΜAFgke^1F6e׺O\ɥ;O[M8V6e/WdM_Yg%3< jeK * 6La< jba3N1=w(xW;]x\mz\UiĐ`UW+I[Y Xd*#WRӗ݃籸bT+ׂ+V+q*qRZYX Xn0 j-q*G\i`˵T X;Pf\C΋p;F W,7Zpa'ViF\#1n=b+\ڧx+傠P X5t\JG\!'"\AT Xf]Aevx-BMCLeW,WZpj::cv[XpWkjg+Vǡ}ĕA+QX:] Xm|vfi!W{+ZױT߂IK,m/>nQVE}Uр*න]odcUP_m|GֱSXO,yiP@EDŽ&>搼rVS-@]>H+mEC ,3rehz=V8C ȥPz&\Wͤcb7nX%䚦"xv?`Kޠ]OvW{SvYq }Eb\A\Zi+ViJ EM ֦\\SMvj:X#WZ[ prW *q2$ʮX rеժgWҸW{+K^Y dW,w׋=S]A>I"QP; v\\_P;JAG7B>fyvx)3 $F[<^77N^7N{uRN?\h/5?(-s(?e:!/''bsR ί? Br'˫eq[moqiW! 
oOO^r_!N_BiD⽙cg߇k;G[BP'gl\Fݗg}86OL߈&\5?_kzC/4.dk qrYu:ƨfnKuHߓ!c yOcƷ׸X YU NN7núVW_eQb?j<}ptЎsp^rJtŨPK+b-N6bK6U']Eƥd^ WתA<\!7H4+ lScqDQau 5 nl&n06c 1 HR&Mjl׽MU.#P ݺNv)ݥ\`C';1h=BǕys_ "Q0)qnck̀v*6Qobs0+zwJg9fỎ▽s#z5SPB#A;^mtV\/FKj)J4v~qiAjЈ**֩pmГyIU7㑝}-Oޜvۻ+7> O_gc;? as '֖ݭ]N+6U,ب^xiu2<Ƶq)yOmq SN-Z#w,0 !!D\ b\'֣M\lԺn0ѭ 9rnF780ct urHn{Sݲnmxȑ>Q,R½n 9kv @`;\zݸ\;PȨ*H[91 ^ ĤfCj?mNZ4;I\tK{&zO;0Irf=yÍÐr#jug؞]~lxb:a(ourڇ$inc> Dlׁ[bLw-:ސx8zSo_i[Rn4чE&{~*IJP EF.CI*~{C BB,M4Ө˨g374VYB,*cya9;,a弹,&hQiN`N;Nw;SZEhh PHi |ʂH]-BhF1[,Ђ/Eg"򞂱Ѝ9C_X9$ VK]^p(趕3ˋB^iUEQyVQ%Ú3\N;K D !}䠈ڑ1ۅw#x!Ij4cz[˒S+r 0DjRy^‡)#&#)Np$qMDS+C􁿄%Xg $m(G`,=̮\ LZjA;%Jidfh#}aOE^毥vйP嶑*أMW=?FO$`r.SZ8pq!D\ b\'֣M\)㴷薭hltkCE|jUH s1q6R=Y#[ֆ.)=ԄpFZ>u~. NS^a823J*p)BA[QS+s+|%)% UiA \G,HzH)Mc 0gDbVo(B(r&]()9bOes%@>Z` ƷIg+j334[`]Ӭޯ?,4oI.j%c$#f}djp"X|dC:kA lC-u \2X[hru Dsmtlʁ2 ;NXZخ{'AQ䗨N_J:\ȣmFoߟ[<N+(*݉x{kLNޕXc;Ͳ8(R(*1 쿧8eXQLp;r4 Ʒab%UtYh 䅺)]_O:GY&gS85bV)EZEYѲZZʁS[9۲mD:9Nm"׀=58^F)0%rY<NjsCI!s/)?S:M)7 bϽ#dQD%{M_C杆P6TcFmrҫ ]@{Y@:)?/X_1[B6dͥc5ŠD4| !-+ q.8SZ,Ub!^<ޟO{o>CnEq\G);Ũ4uq)4I*9{/YBH7* N"{τHb`\#ɞAcJ 4D0mLkeG$2*wu}F \Q& }]4 tYC`ׅJ}qN_"ݦF+7c#2jgz8_!r6&7 6cYVHʎ俟ɲG( 9^A[&5S_ݫ`Ǝ/D1kuMkG]F^5ӦCE-RQ _r 83r$u;mJ1ǪSNY ܵ!T+}pgWxHY?*ATl=A4f 3;r ՃX<Ȍ.uR+bm\m[I..>q,DK \?&} njZ:(Cw &Xͬ`"vsCш-F 9#gV~)t tM8Bx`"Rr;jVMM ɜx4.<Rminc@~^k3dZ |dc]?}̚f7T| ODJVq/eMEVr%sL<4 ܣwX{=,i HFlI o k<3|GJ. 
!:${MØ,{!S[leTfَv%fmI:s`εTJfv5?J[g .fh(.p [-9 6;XRC-ų8!p+ҥR?`BT.U3rG.%x>F (J@K:j(@]KnȌrX/qYuSYQf@ gJ%a#YܯԒ33pVDf␳^ez>T2ziPG-p S pDCe6ō F~p:KgˎGif){i vMwۯfcoÖ1-%c#BhBv9W'Fz߅˛뎣U>6/Rt2LH=;q^F*@zf颺yږ0\!sl$|UwzӫYcmP{7eU#eFzFFVxCg`[pXzTCPmSU}=kʤȫd2fz)4@mx'P;5c#kΖtyw{z֙W<`h O2SCٲ;WF/fa[!KߘCI>mk`*UǤ]cZ_E ]l=r=s$ʣHHlB (n[;!u;xwT%q^+i\ơ -(kJa!|,lP[ lbCT)l wk(5F:]Rȕң:%߿ 6)$7Ds&%S1^I_-9Cy}%n;w% OTz"ٮhr>*[ndA!kPϪu 1w# vwdL{ D4ZA#EA0UJT<_<\q|Ja6>o_pmQ2|Gx/r: 9j@ΑK n*҅t>?Ig(B#yΠZƠS*xf-˯u)e޻2~t)+nvzElZ&2wd&c˫j/rWW5ߦgIzqkHԇ[L@H`ۛ^Dng  1Uis81k SCI4pjL{zqjߣY}m$h=(V"P$SmAuoY[?br[KodGVVsUa[/b0G;.8`wFКsWE(#X Fk1$͗ѫ[A|7[6/Pk=}_Qo#e+sY3\\߯XI {ku>+ĴݬQ+|m;o#:c,_V(cfJ,S+9|w aQ)_I!QL} YAFG, a|bSc`Nw VH;w >m݌ڬz ܮj>7>FĕjssF'`ca'\(Jf8_KYphp;Sj2+7F3cB.ս ;c)a$6ǚI<"\{h ~[x v'O Q.ԶpF?ڌ12)GwzfM≻x* yZ"zjBD SN2/@&<-l١ 5$", K?G|%]2օDGYĔdJr38JsT:)cRj|`FlU^CL A6zk,b4J!%$G8}5{JofX_TJㄌdƌU.gqK+E~aݳDfz h6γK`(TE rQɢzp)Yt%Ӊu$];lь#tkD]ްuD;ϿVЭX`LV=ؑrctH!WI3#,mM=}CH[Ӹnn-#d-utPaGhKk?xDFp0v0c5 9Z_5!恔ĩ6K;XĈpA&>JZ/ݕpݕ'2R B4xlc0\.5 &(f=G)Ik DX"wr?Hk[ AS7Iէ-uQ7$9B`)b\)1:h}~kx?H|v~B[.N.0^/+D֙;d7;lÏ_<+nF|Y~/N|tz~Ǔ%w`GS?X2\IE?_wԝ'u'I݉{0%fsRVHqM,HsH ǬaM^S`^dHT3=3t|Qxrqi-L[l&y;@4͹rm Ezm6-*e>۹v9!+ݿo:wiMVz?>OmLL+kP]UdJpͦ<ѷ?׋$'}򌸎>! !H+Ub> w(5r0 "!R桏h4llӏ߆]C&Fǫ")Ixe %lΖAb'ߊ`t'+%BrRr,g+a*c9pC'rrpp~nRoՆRs[K|, D\/1c"fY'=dL$|$I67P "q+qX(th q 6NʌnIA})g|*hͲK-`4SUd+Lď DSsTlCRIl1HGJgx#AY=Wf'ЉBM" )9>}ED|:'mi,/O=ˋXhbSb&-"ޢ?($vɘFhE+qk#}GN|dQ$LGf2i9à,@D5wxmlN{9\'˹ }i.oOUap﾿~Sqx}gO/_O] )+bP}&~z:ɳb~8=;bE>{_ Ӏ6ONblǩ#5nq8'0H^ѣ׺q5[OL nEN޻ex}G~$]ϔ;ys7)srZr X_)HrA֚%ωȂf2\_>Myٻ涍dWP|ـEUz8*{첝KR*\+%A;J* B4JL{z2 ^YRd'_.qpĺzS!" ^҈#UVJkpHgE  Gf1"Lڜ1%HHVX#;=׽@큛$OSei6Uz_iM5gWT; byB3"xx(4୰"ƌ gvp'VsRV8E.slHq@ #_QL5$S EDbo!dJ%j}7V@`;ޭ"4dw/`C%һHLk{lg8n33VzmΘL8! 
) ʗ@ Q(0.u {kv ăWWtQU%'@*(}P EpElc%cfן~FLlEϒG@E3dVs(o^Y2 ,h:M{]Tv!*G/xnUGuTzyrY`g:v~}1Ǘ[; l+L| "7lՍ'gkK?F@D%YdmWQzufHe#ߥ {.v{.'Yp#[x-f$/X+(4ooV l'5} vaaɻKv )ٲr܄t:䍭d ye52pN4>:Ss \ 6>x6 浈UuCl8 Nf C4CLvCt["5[URs؝!ZH^BD#{BR jD[hܠ"xgƳ<^~9{]m+3qvc#xɪ3ʼ`ƶ e605Of G,5#[OlRg~e ]1BЯj*_C eK0x t֖Gk C=+^wi^w[z׆{:jۜvۜBRu =; (`◽2Fܡf{ՊfWҢNK:Bw?uۡP#[_l$D(1 X<ZӤ&Ŧq8[|6qsmźir,90Pg:SH!PqkQI9`RV%2s)#X&c5HͱSX `줙=b+ s 2ACG-{s Rɡ6 Ս1?;wC'1waYXجӅ638ˬӖ7qK/mLx>ڂ%ΗHd.[N/a5ʊ%{~ n_JuSNX^a&.9Ө$N*ݹsX b)&l\5ӚrV>Ց=݃ KmlfPX5t{f^ qܸ8{( ܸxI6۸.dGrՂV[ۖu #|^n-}A&Z\Ze5i.R|NbXg6mK]1hH&R)Y`!"J#U?- YN M-XekjYkv%=p{rLڃ1^2rڠ "J;9VUf{ $Y$jށ.G)Uzi-7zOƛNN.xŀûElf7~|2Azf}zTԑjg7/:Njϣӂw;  xjO/ Yysne}aI70IqV,bVuJ{&)B{ɤ" v"sfS)7C!c3t̻O0*la͹~z~MFeně Z?9o>ĪtghU/9^ĦqiCLZNqET8[~V6sNO_?Â.)U#ҳ-7-X;gi>bMzA MZc 4L½^sk~ hFײ;:Z" 0DwWh.N_#"Ym:Jַ -B@\+]$[E_[bJ>By5h1/]BI (jW_~> eE4Pc] PECB9cZ1j!>+5UM `O+̳ B |S*6Ϊ}y(?[ P;rw~t ff>@bf@."n?ӛf8x %Q?C!.04p"`%(w}mUv;Vn՘&H4+x. כlGZḝfs[D04砜fhؘy7/G%2)qU{.~D6ݝw\"2.{'Q ZOwG7,_)w?'Yaݒk0bg6җ$Z;a}YZMHvO_L&X %^4<D'9#7MN3e*|n,Ùj[L&ׇdKBTeW$ HL+HEq 3A/+d2uz!P|Q5_Lay߽ qa)V7C^XE"mտhN9ɱ^/|0.+SlQf8x1[Rc^TJ$/d%%K#z~T8zdƿ.?Xg6i(&J{l?hq447>8\Vy`5 JȴkLldŔQ;+a&By cb0<(d(BS(BX1 7bd*%Tj!#ހ2 T^#@iiBF0uHGCe( ?5፩?3^ܻI'Rzas]&=<\[r ֤5{pk[ӹ=3~Bm9d$mP-Klk^r7t~Lv߿~<,+{g9Q7^u8'ms2_ȶ:gmdعFQJA:V^[S>݆BLpjS=,7[Q?GDHtjg 8EXV6$)6" j_}hA^m0X:p5KM6!ݺ39m:w^rz][s6+*MR*TdϦ "sI*4(ɦn# )y_9h!95)! 
vEn%x&SiiENc-hc7 FR6e0Er**QZ+"Te9!(r#eN$_%|Sdi]M1ti;sWL$,eiRJ-z')C_Di(Aɤf寗8gD=/K 'F#:u@ZkuyN-d(VySUuq*8, Ǿ&g { `%8Y85YW1pլO[?TKTNܥ0zx~NtP|R>egZu ľ"Z%=<^Uj11G{4779XidPQ(Cڋ9$slCuo p>_?yЦlxJ)y L.By"_ͰT] 2 NǐM/*%Ǜ4JӄL履KNOo{5)X1!NGeVBMzִ> Í+EB"TnZWdhE >7Tu誉޵B`ie_&󉟵 N彯I 1kdb 3¥#cĨUCuX5a0`jEMqK=,;.yutX&;3إZ- -yX Hב}$t(+XUPͫWA5Y#aRlL 4!"*邓>%H|];pdƄcq;#Ɩ+Ҕx'roG0n05JaA*Qͥ,jrȼf؞ݡdi"CJɋVԹ%ȚvAC0י iPιg XQ8lB_Ar=)a\!5pb,zɘH9QKe|Q%f㩑=m_vi8gw>^-켡[<# |(ZȭX|v~Es r-M]wpB3gn:O [!pd+<\ "٩Fx{>_rj˳"c2fBRϴ<3*WqPEx;s=$ P_6paM%87 UX.{4!|.rLP ѳg9D-V+ ^0ϭe\*~R[`Njd`w},B#lk~vhMb+3+Y+ j 5!~9gIf~wCB27 qIyK*Tm?K)U-lo}C+5 J`OJ>"D瓻 qw~k5zU?67.SOs³Lŵ'W]uI ǩ?gca/|1>s3wׄ,h9 "!C$'..PW B(l%JSd_r,Dz"MƄ@{j7/o%2g}@ܳ=`bnЈJ` ^cix70z#Rfc 7X:uW ~ATbX|f !55aIg2K@9IL!kZpA9 Aٟ(K/+u+v\ݬ²&I5\$n̞] ،C8G0dgDz!`c%hGWK=j^}-$BV;M+?6?' ,}17\~V~!l~:V0T[ۛ%wϖBz О{?@8FҒ|*SZ޶n^-U1:FvpE/^#Һ!_&'qnZ7'ae:cԲn[EDH[zXւ|*X$ӎFtfD;OLmp%;F3&縖m60}ea$Nǂdh]$,D%B@G'0A+Xқqy~ u) lHdMh\ǝV 3Z,ٮ:D;2!y/T?6$@c>[q@4teB_J 7]1ŁYDX&FӇQ8U?3=.˱vW&\S+q9eكLSv3lyfr2F=͌ATH;AnVg*cF*׾U>1<4PY^{xuqJwWeHF %ePIN@a)\gMhX I5ΐwUjAtoj+J8!g,Q[n&k  fބi-`(lZW w4m` 4GXng,d-7`+%?^馿~GQ%C{~ S}a":JPlhq'Ux.xݱ3&zN b97kZn0\M`L.3/ͼ,d%jtSYY(PM:W[,\7I Wa:,_6xOB7dB#t!Tm~TSt8Wllmɵ'pHn9"ǎ0!2(h#y"9& oj ZQfX 0?rxo03kRv_e槞y_ү6STx|9Z++ՓѕXF{׿npF; wye>'X,7=&X#qk (TGWWXᮢ!=el[3m`&1s֧Xvb*ߍa.VOP|wB7l2 6OU?VJuBJi˷Ƽw[>ٲ8D( |: ~z; ܌z5W[]X`qS6˪{`<ξ/:h_cej`v٪w4XݾnZDIm{rS~`Ln֒\BBk7>dPFQ)pF?jͩlNwsbĕliՅn2\KN>$bcc9Dc&Se.)=Xx% Y]{I3ˬ9#Œc~ʔ`tK ;MggRxG|3} )) (85}jk[­#Q>hY_,PSeOR͗t`$9ޑ\2X >{x0i<R3ɉꪒ0Q54XLC='2Bdx10WdHHH;K,T7E{' g\NykHWԁw=&jcDWDDm4!P~ĵԃVun3Ǹ=c7/иuP;] $h萴;(}!t%DmXB&(C*Z KzhNUW3`~Ԁ. .jn6oأ\J!%PN6!PWa7,Dq X v.`Cqq:_]ΐ~w;Bq1O/Qfr".I$pEZ La'e $gd-yZ3Bo``_3hz?@H0]矟 S6 g70fq LwU*b B)j) trQIr88/I /EaH9'9"Jq= ƺ.A!! 
̹pZ̬*02Bu;'IcLķwq.ɘ.*1,`2ʈf <$D1h0 @M-^{$Ir;TĽ+uItCpٵ_󻷗Qy3;0Q́a9^#GA J/-JUPL.4%JA-*S\1'cdIo'U5Jf^ fcj"Fp6q#◭)*}HYm֮8~IJ5ڐmECr(b!-+IŖ84n4hC (73p{/7i[N+]#Chzltou^}/wA.'L-Z~ͮVK̊!N]wc4.W>]GuǒG#Q?B=B3Jfj(;%\?=*ڐTKTl]c(ɐ+SR/wcafǻΨ \"`^@(HB¸, \yZ.`O@.A,IibQFnjj,:xA7^r~.xf\Iv:AsrB5K%Isf|Pz㶔ܱu-IZH^u@ a87KZc4Eq.K J%A4<љʈ RWWa>||I?/hd8%"wُт;zuqn)j,ʡ+EQML%9,3EIYX@QE^L1- LCקby*GP&YNM.AA($Ѯl=)E?|1gR4 '5机\)epL~|&Ri3T$1˖]Kvy79)Ϭ$lBDq& /%mv* (Q+O6P^ig66 z FvPHr=}𥍂pGP 9l՘hg* Ky;hmTI r%NwLaR%PPjTuh kH/XAnHC#hM+Ewzfxy.ۊ1Z@hOd)⮀l]iNWDѨ*?%|"gIVAY"N 7om՘]f磥ЦZ2ՒqJ,!}[F敁g*ψ$2Y:3ت 0N#J.w40oUck@NORŸ 譣ϮS3IqAC)NU8!"*%rB(+$KsX iT}d?իYI1zC1!oߌgiFC0 ͙?qZ#r#!tH5;KNK2 ZK(sevq`+ndEDkPJV ^w=ǐ\OD?rhKԣ߶瘲.jhWһ9m 1ea/=#6mK[f_хfTڲנ]KJۥm09z v]$A l*4Djң.x7W(@Ŧjcw>ԣM]aQmW$[4Szw|dl_n6~)Jt+N7qD*:u.[[NX\c&+VD/K&Ed4|TC Y\Qp*pM K3]T'7|vkV S>ԚJSyPvGKA""ekО+9%5Otl&ӇO}dq`jF=t\/>'Ӌ]z| H/.) 4<^IH mJD\sޤ6I51ínhj)2!w▣IKe,.Sb,ō\TRK-y#a*cq0 / Geho ?7m[7^rLOg^-?Onҽ`oB\q E;In:$OGρI#t!ufFG7{ޱJ Vj@]U) 4¹Yk7pϴ>G5k//>A8} ҵbZ\sZ #2g}D Zh-k=-Z;өxtB-SuޗSAs:k0ee&de`Wz+ê ͽ|}sp.gM3vco'a){Jp"kYAmX8Y(" x8Il]P8X֋o-@c;iJ* x4hxw>LJK}Owj2)Q ,b.͍O47xirňm9]Z{*ۊb'FuzF8AmQ 5, !qIj.#?Qѣ2 X 3dWnNB)ki)c6M)MCjD3=RW֮쑿ަ)Ρ#twԩ8jBz]WV|4!!߸/ScNt#hMJ5(B$! ϥy U)8)8y:4D=W ﯮ$&.3LT3Ilj9M,eֱdQ~;?i^ZLI}x'"m]PpGуld7|5k D)SߌxA7~\\r)DN/WbK&%|}Rw,9~ ,aIDjDǮH`AS梘!)E3(߲O_ʜRTp1ŴhPlNAµ'DW$ǚ/ϔU.G.=E[!["Zt8X&'׿g.OGA6ml~;<㓍 /ʠlG~|}ڦMI8+6$ :T`/de/nzt2¯|?_8Ĺ&RDu˩T"!WyבPRC^^dwj1wذ.zeBw>L?oE iG>z 1*W 3-qUwIEK|ջgg]n'PG%2P`o;ߚ{ճrl|LW D@BFۣj3.de)i4wyFTFY\TJqkT)4irDm*YNFr_ؾb67gZ$ @&D%S0)U.%0&e}B{մ/aTGKcZhbo8~^HXT|yGTz4."Vн8[1y8@tNcdHroƱUE5f%m^xdRBw@Wеā./ܚV0g^?6GΕc9su@p0,HJt3XHK!//)t>WuΕk~D5y‚~ 1u;A&`W6ء&`GU59UyJU)yU ^e[ .]{U-߃ˋ-'m5 ٧["b-FK$<90)r]W[F#ȓEx :xi>m/"ctHHz '嚝.#9}Q:hNp[2LaarbI™u{7-!oG[xA{; d"ps /} e CY^M<_MhbpaW,8ʐ7_s<2NHU 8 h>%)g7>c:&_ t\wiz?;rG2Hr7WOp6p u8sZ7~5xWMy5.GӻɀfhqHpW9+Eb\*RdϬ%2Ynu]2(LQzP ϛ޵oV4vHžMo$QB/?-?pwYˏoCTiM,J#Y[XrA bok! 
J F%KB:Z\CZЃjU?<"%hAX0nDP1nb :ĀU;1n:Y&:@eԖk_z.WD>I#tɃOInf9693nw@GQ\6פs@.D 5:_vZH-0S2U9:>Nxdy=; `ytuhziI~ʣl`pG ІK*^^,O6J';O#LĻûl9,vFlL9N ,? 8BN~یfS Uv+ӊ Ъ^ZLVp 2f9]9-M--}gjKnMM,6KfcLBrF_nZS_.gi \9Hg ķS: ǰem8ł`%x2כҴq9ֱֈ2v3g'fu"C> Zg)B7i/C .ǖIvrI?c. .@ULpQDp|I Ē'PfҽGߧ7BZ\?[N[ctVsj5T9j\?b(DE'!8$b ]2^ƃX[RQSJd_?EͮmFYxG>Ӷo]M+^Sۚs}WH-iEU9A`̩LYs 5(VFTwkl͜8O4;OܖbVc^.G?Ěf:_0auE7t|=M`].Az͉8HD$"U"V]/0n "*(D!VT3I^YemEy(jFWӗqdg'*]۲^ZۜLANY)\,I^#ErCU8\t@ 8,W B'@ZhUZ TЕ!Ɵg<2 UNW A?x Sx0W͜xA^O}N50\B׀Օ+LRXPw$].48mm-tl'Z8R%vt PbJ>q)K61Ж pQ!ԜZl%%W+̓09K? +"b k,6Qdg86 +WSK+FEU ٹ)Э+{L%%WX<1JXLpj΢ HA#B'@<}rL1/L-Dg`g<u9ôagX&DQ s #xCcKxNPf "k' 1CˌbYPRhoM p`̤MeT ĆsLA#0h HFHy<@H`YeģCClFEVn5+Q(0&`=[,'| ti{dp3VHQ >`ƭFa !-vS63o35wRӞ)o$H@L#- e5L ~ȶ:31\=u8;7 #zGКf6tQkhΙJVJ9+ZoZ3(M%B'RzQ+[»uPpASu^&SRIT̙l)OMo!le%)7-ƉrB4L@VR޸[P7›p?z(?׸߶?{0 uch si䬌1Ce*>UYZ&7z!>_Nwט/NIg|7ue'mc@1ote^mOf^snH9 K)%ăd-@MkDos\4kȌIrrQKf6}:1>Uzj:n} QXbо06ȜH4rjN>|$P)8qzb]o3_Ad6XGe6=K!^+PJ[fu=;Dd2M gnpzVLuDۧK&.m i/·.4wz~|q&7BDB F),ء"`Tv3y6eҷJ!5Kr{.4ڏFWo mЊ0uJfnt\:9(2ZL[ۂGGGGUSmQ'Z% gЈaGW"#{+P@mmM_ԨfhN0&> ڎ⸾@t&/)arǮ}ir/S 43FQ*ջrLm El{/a&v&1"Mg D\aqzkŔ1TӁ F Cxm}z]%+ez] 9-]\hTLt\hm7)+#0( IARA@fZ`:L +0VA6j3j!]݅ N*MuGn&ƅt,`7&f8|BR]_7zORߓ.Z}[XNخF>XWJ|Dԯ?)ce%(zN>Bɭ?I1&kI6.葼9D($I'Rh$4W2[UBQ:Tx)9Q=>HP|N;) $ :$;?Hv~PO$B*t4uj@^0%1՘c*Vy4 K7v4fg~Qfh-2-}Ule,*_`s5 6hr[7y30a{OO0*}o>=?;7kާ-\go6ڟ ܟW:=sYa9!aI#_4q!2l?_}nN<8Q>`N Yp,8ߚS*i NXhjr0R_Лg#DR7dSI*&ͥ5#Қ0vՖ3`m`M[br1)sIжn{V,0jyRYV-syUs&c7>_}o$]ivLLvsc R?Z7;KQx,?{9NdWʼ>l~H~d̵4Pe_;[PkIY}Kh\#gl>X lW;itJ~228\lY$ec";&S3Ayi<.d~s2BpY/wۘ\ θ O@NĩƔ ^2:?t7>/T΃Y~d2~ &Yd|[;Ntofr}7W_'2w-bibI;Vn[Tk iwp>bB 0m> -!#0Hγ7я`ܭCf0I؃7q\I[Jt,& 7{,fQ~86#G %0 aw5dj!2d(Zڴ1VqF#Z6iDRRQw_o|PNh]}/oJr^4 nu%A;iAQQ$4HWmQ-i0D{wwmm$Yj!@p~yYi4pF[![$XR՞ٛYDb cR5n둽׌]}:}=b7P֊jVi[DO6\r/ .W&1Q5lEYd} d-<ɜF"HブZQ|[Y <5KwūfР#XY;[6cC5i^_Y ؐh#IĨIHDlLؐIc/:dDT7.<$`hTְd[Fu,ܵ(`o!yAj&l%e,zVCB60@  IUsdД82m 8؁_$,N=^'$&gLU {d}ξ|xh|\eQ؟$33)iUUrvz}tَ)l-t̞mXd B%U6۷Coko*4Vzh p0 1ȝio3of'氖M=; :JZ]ݱx \'JCZ&L7o,mixW[acl2= WMQ8" bLo9dqI$RKi0ƣ4{hc 
OŊGqg7#TAcS&ʹ9L]ñI9|ƠA6:,^Dpӱu^Al6HGlal>4x'l,k6Jm-^)ZZDO~I Gq6Er*Q]Z]N|JQ:[E-K,9"&D>k >Kyagz2,p}_ZmC"\z^΢M;$yS344W9iq~͔ڦ퐱0$. ԟjMp{ј}X($E6 dp %E痻7NAo}%3R]{ 7[7-r rK!@{2dO-(7ŽIC=(-(j&e demJi#.g;xWqUw[kN$8NjܒI/I h8{[%~yS<9Sd\tV!# +;Bk7rCZUhNwq"O5x@QfOC)ԴAûiһj""u1a ҷpӱR%6ZeHU E:#V_]]Z I0$Kd1nѢK/,z5egNzc7f/w{K.hs h*-MG-yeXY`F*F0_ůqQ#Vn4bΉ#L&1IZQqB/)~cCKaL%=m [QiM3Y7zy**;MjM/Cůf_D6]1. YΑ4MB7H:˼_.p) P" 9s>x AcψWc ))l ,!P(rū-$׿QsqO4`Ȝ^,S=xhӑ0PðNUxyG۱  \UΓ&BhQ{FQˣ`$Fi2ԅ[\i/3 n-!Rbg#{n>~dS0džኡƴ%Vf)2+⑜⧩3=LbqKlnzzW+㸭M/bDY" ̙YA/ xiƾ3`}2,kf^C3 K!tXx'qE&!7 Y|q֒jŇNc)Rȭ,`G,cI#9ʼ"ӫ}Tކ m:2Tiv"@EJfAr,AAX'EpGCE|4J8|f^1f,*.r1G}JGе2GfO[OKNr:bzE4˳ aZ c+R0mȡ(%F؇16ĎV;zD. ( O_e%&ahftw*ѤZ,CHE48Ou-ӷa.H(SY[ktY٩tJ{f%c{2-;ҩ-.2G$+ —*ݰ{;Jֵ ӰVRg@zd~@_fԄ /~2(?v\Rx#T>EC{ ":#_~Ͽccѡ-2[ ӅFfu~twk!}fդO8Wsm0I24! ؙNSFD}h[5}y)8I+Ic#*I5-dZ:A iIcCG졷#GG0B: ۩)d3B/;"H݋*䃚EMQcK.䬩Ѧ)ܣL6oW}AB&`q7 bxu oH ,I|Nܷ w\/q9vۡՀo  dデl}1~iOj[+ hvUvU$6Eƫ8A1~CkLE7o[_ks$d ^pWZEYL|)*:Rv3 ժ9k2Un`Cr:-UD(hvT!κٻ6dU]bulARXѧń"evU)i(Jñ0 sș_]]]eC( VJI 6hVbq|ō.vWpV!*kMљ$&Y(xY U} n"E9!6L: V+2)vVr*4Y9S/kjם6熞K~ڵ0 ߇vI!7k`9Z vsڵ{ҮRSk;fI%8_&})TPtR֯3.ӬJJQRN`dɃSiߺ6蝶VjQ ߌ㳵)b7,-Izi5WMGR }EJUWJ%tZ.˥ƘlYЛGr-pmQ2r wB N`F]fh~D \g '?ƽse^ )y9QjC6:) y[/K2 sj)*#RN6&?x2;q<VO_r;!ʄ\S FHL\&hZEh/By^y*ϋ[w&Q_mDxI|ns T[TA7^ ahG7:"a$ '`J<  8je BJf`t`ptZRq]YsDC iâ} L.'j5Ȁa]rR puSan`LJV6Yd/h oW'Q$UgJSZ.EQvp3:ӚY]=W *֤UKԠA*:StpgT0CӇZq:48˴jALf(Yr*47'`(yW &lÛXtJ doÛ;K 07޼REf\x3+H}$0#o!R5O7P-[ZZͭom/TVt;nޓh,iE;R3XvHJIj=`tŞR)¡@NXmH>|ECf0xQUьl?xѕO$^pnEcb6\tnQa,pVĈrbPՊٮTAxn!l3%x!Рp1v&! 
qEJ3dyozF͆ml|RKE쇱]}jĴ ͂aIfґ Avm'v{+XK`\j@WbbxYb,sѭss9㝑jiY]QIzH>]`BoPblH@,+ 샠DMOZ*ʫ8(⠼ʫ8#k{,0R7C}T 9Fb"2E-'-DVi('%665j`>4gJ+^%" ૄh$tujcsxC(Gϑ R iĘM LF yP萜MGI-Q'& 9C]2>)C&nP͋톀nr&]l,RH6VK=M.Ƥ \JIҰIO[ҰڠD^30NYV+;jER!&H)6֜.Tq 9 P#TGy .ƕgZGl DFy$vˑTq1LXwyao)о/pvnlޡ;{o 1+z hct:\o akw\˅] H y77j{ MNg h;{_^}۟FPه@ۃ^x37w_̟_{/O;u˷o}8&8/Ы睃sgjO7*ό_Nz^Ɠs:m-ľ/->i4I<Ί;W~ ,ߙ{Cb;fԜ .:FtѫqSO1܄̾]"s4ÿ_k|{;&Ï]ܽˀs7z3 ;{atSdU=bʤ_rZ]LXGDq{;o=zT 5!=Ab& YKj]S^/yKڽ[J^;'\ΑtFDY{)ʍBbq{RV1 sk8Tf"⋌-#8H8N?pk-~XJ,X($I{/ W˂Rf5L' MK P& `;R_Ji}L6Vt] V}WX޽y%^FOznM* C) HIVHQG"R$謼$e)O12"K9,~᪹|Ӓ]]$^BGD2 9|.B5EJeDm]1KFbT̙tE;j&$U|}-gIU3݋{;[MKjՇ /?O_x͉p6Kmc~vI?Ľ<{YGL_g-٬3.?pyuzJ;iǛ~8yo6]pA/\K/T zX/T=qF^ahhj<{JApӊaӇgߺJϓ_UJruB{j}2(>7[R˞$<4pMY)Y*'V^NFZ:}N;}h; ŕ:uZ?kNO=qa$}F+SpM$*T)>qCvFtĚ3O<)v߽qa~0?e2קVq&Rd_5u :g`b1=(>bp&e~Ű> 1DrUAZFe|J?Xf>yHgv>}ss. :CLq)346`lɹq+JQ1*A/ϼwk$ɿ[ wk_cYWzKgHQKۓ>r"XMu2<mN E)l>Gv- X<$ƣAAA>b,]ܘQA͇fd!| 26IhP1)^HZE˗,pgۋ.x21 4 `}+-+P?Eo\q&X 0j3hHC2Hv cI ~۟y;0܇p?a w`ýF{,0G7]8XJrAeC]Z[W+ ${$wܝ~$w=B]3Zs#W2p`7A&xr?mN -Hf2JkH *# zy#=(nL;4HC"~d5HS@uo)K4R,M07B(hM*6N"Fx,whb|,we?]|}nL&Fg'+ Uv9^7JZss"5'{ջJ;wVxN|ӷi|g9L[1Lُewn #E(X#Ur#ƥR"B]&Wr/&X0 ݷ-˳ok!<>MDjS_~Ø}~ujˎse?:'=枧gݣu]f%ˆ>iŘ $sv`RZ{Sb(G\U'C). wB :F4hDI#*F4hDF4LԡS!e60sdH]*CζJ(FA%j+i+Gg-'  !ʖ~ȥY~u$'#if i=YV2 7QՑsu>NwښFaMF)g4ga4k~?mw WzcjW zw.x G{ũ?眨+FL PII#ce ٨vf'tAD(װtt'Ĉ3he|{8h1h1hקE,ۮ6,_Q,j[h ؀̲9xX" I:́{D;G%I8&X|%l=(F!hޡl2& 2V[4o>Xދe3t g`?u5Nn΄ )p<}2نT &PdX%^mnC|c|eCda+25$$/`>(ߙ6[R PuKv{{_^|ь<3o y{W4;0loP'ݱNz"4)s.mPu "2}Te)+Tߋ 3g^Ƒ~}w1Кm(;LԦ?~c?oQK1[X+79dO_6)ԇ| "NɗcB=|H)WJ<'ke⨉n;RtEf!L=ɞǺM` !PHtJP4Ilټ u~ K]~JW۸ii/W@AEMTR<5^v렢dQ]&ʪ0Q\̿$m-Lr$^8UijECoP!Q+kP`&b}~TݸP՜u/?3f\Y/f1*g$V38n{wm󧏻|}-๲tKFJۣY*zpfMO̸Dnj?ݯ՚7swӕGYBR)q8äD"t 8s bTe#Ix,U#KeΛl%:LЬc+ W~z+d}|]sŎ,xɾERq b{ :J yy&rja(J:Rh#L:Z#q1J\W-vcdBBm~3xi-NAq˂OPqƲ;,NQèƁJ].VlמP ]8zeUM9oGz^ZdAugP[1TZL[9Zp]}Gˉ{7#"68qf};w0>}~(&VV:gƉ1/tneى!٫ )MviXQSԼnI][or+_r`V}6X?;DO_%\JT/Z.gvvdDċzMݺ @$ pDK7sh`kvR)A@32. 
- jV08R94Ozb2XZ!nRމmQ+$4vDMvvvE(z7.8gǶrƐDmEB~'tgC XbjC^kQްn뢮\<K>MN+Y7 '3,|Y ٱJgJ,Y3(͑vZqbHvJs>I{-ʡ'Yr@}1bTToť)`,23-@4mge.y8r̙i VYmd}p55mҁb`;/j9[tqA+R8Rq1ow8&NZhzpY8;9̕#twLs>z"U:#L6\sf9L1Z7{58kźsVbfu?jǩTKw$|2 dжZJ,ԔJzpٛ:Ʋ/-5bgYǙ<+Rkx4 GX-Ug^n&1 H=eˊՙ+ɒȮF6Y< T&hc.TI֬ZK!GزIUZk J#a&80qK+G9b9bv5 rZ ܨrVMHS) ujIQj[LY{~ne>Slf#Tm?EQO\!lCtdB3uu..6@cHJ!rFWTyWN_k }tNW HE%r6`wm ڃ*rS&alӜުO}^|45CK!NljjWpN.`G$jrdWԔ-%CQHx5@S5pAnW!:c"BƆVf'yעfq`R[6] 25S=Ac+fܣnmpleYaqG!›`8yF ۚgcpSLrLqĆRu3,-G9SnY @o}oNr d:9Q~(qƙOi'N :$V (O%g rm y}@lm`X`Fm"Gq<8dD$8ONxQ[N&(∗g Ԑd)=8FX;ؒ)v8 9 eUMEeн (YNi=ζuiLDQiLz[;,ZM 'E7TJ|:fa:ڼ֩:8ӥs]lFӨJZ-l[{hqGeJQδ5(fF3bȰHV8q^60z Fm"R!pDY佭43G v4 jCjԣ7SSEm OR1XKǑǀ K(0 FC3jv8 ;-J0`IxHUm/5͙K߆ǧrhd0]9WZLS})6oۢ$;8HdWxVR`4*`@9uC#ˬtjw؊KkxMP=p)+ԴAd/.۶(9Psȉ:+kπ94`4M?,I)R ,q^R9t,9e Km[[T*ay'ޖES.Òpfa3~ec Rsͱ(>p~}Z2ܿXs;짍j9/uq]A`eqX?S ]ۜ5BJ n* VV" ʡN3R3&͎q@C,u9ƈ,d#L h32hs7Ž\5[)'*k"r^{dA1A'XmGaJIzT!kr5w瀓7 i[4m[D1 ,Z%%Ik;:(%EVlJڍ@~,k䤛l{=8s試 >hNQv`N*Zb 8؇7WZ&mLq a7ȗAi>F;*LsT6A?nUn&}20_rްpě>Vwe,ۻ˅_ЈXɕ_$IE8;Y]$''d>p ݪe)Jm7_=۝+&$Y_>'y~7y8co|S*YX G"%:2$3eͿo]RyHI*=1eZ=>EC>8/!tf$ EoOw 1HYBu۳A$6ۨM҇Pnǔ/;*5>2ԺbOŇ+ ƫ/oO0?z[ + &nt_n,?!ppQ][囓N?_[.I )VZ[ Eh"Y>Ro %}!ˣʢct7]Qcf}٪$H#Ԫ\ hEJ'G۰O ӭ뮋vI6;*eD(VhPݯڦ``M]~TG *֦N[^wU/\MNzg^"\~)K,-nJXELbEpƻ⫫,2Fё0r5(}u ]uy&AW(J5^#w^OG&j$qא.D uf 8D{t{!;2o;Naiܯ'9h`,2<8i0<?:teUUʬTN N N NwBX/*'-ދdYcMk,+k|MeHku ?1&υ+ۓ%)q8; w'CɏM+ e='yV3OVC?WNq1fcw'r6%m!t S.LC*8"AB~r6Q@6dsB㓷(w(6ETfmxPIg8cSxر Ōzn^" vS&OH2fܤp[{#J@'lxpgFD6VX,j QF'h-OnHr-˓缯I5{/KU']Dy1(֮zbyE>G=6^sg8Oݮ jo:jÕqT&eQ$guc+7jۧo 3xh?t0Vv`xRW_ї h,s`VBZisdĝEypiJ v1 Hޖ/NGmi#n?_2Zny:{ӧlۧ^Bջll:6+]3fMw_}ׯ}%[*ar\RRJ k.$\(㞖J:Jh_}_=_'@c3YLϲ ߎTf6i2|r|=pY(zTLŚ8JfH?tmn~s8͛}x] s 8 h؄s_{qQ&Ā1S%(hiRksX# j40RLs DӞd n3HFaP_#k;fy>_]=l/hBg˫;L.,> ^N"dσD˟gWW"fz~?TXzd;jE0C%[mhj7vc+z lnE.~"XlAHkʏ yֶ"CKw,SlER=m ŔPTڵE?w?&p~|'X;M[)c &1QK%/1Aa Ur_xgqœ#:րؔ%5Lby NH^@$t^r1 0}vicaCz(P/}o{5)ݥ2SHjJʂ^,4.?WLņ  ov  @=̵(T\O18=`wVp 6{1&)@(&Sm1T $Ͽ_w|?!^..s>s9^_gqroN^ذ T@B$a1 `'~f*(,YQGR7{,4) #((E+>y(Xo =ktiD DtYLw 'a ]~-Kފf(MEy}ihCc۴W٢ԅ rZP)¸; ڭR2f; mڤj 
ԅ oy_|~')RYi[,3{[}_JI])ŧy}ԛ*0Ph*4y_ M M6U\DyJSRMWZj1)BQ#zw+vFן_;P>$.zn7C!Z5jp3rlyjޞ#ܶ^;rk]ӝYSgSЖaOLQNd:G,P2 =B[˩e0I! {hĬ&bb6,c( ) K8NpมE_@KVV~z]*,\6c={kQa| `A7=Hl|:R>m]cX|P{*w|<0(@v̡d[iB;Av5`G|QTEgE$t“rzKm;! 됦%:3]W;4{mEnu&sT[=wJOmHV'?4{_5{_5{_5{_oWK[%5Ljьx^Upn7    oPXx[BNW7Y|[xut?o)H]>yvqmsjT:vL(K=0c5WpibΕLkPĺD*~€ c֫=%PV6ɉLBRr]d8GTĵ}n 9u'uw}S *2h0ZAԺ6BERNtQ"rI'Gt [43ԟgݑ.r}mESP}rvT5pM4LLNzҽд( FiP.9*ebfY;\91sSBwJi)և.`BBzej[lHzcLf(5bH`b\OCEܫ@ٗt08ք\; pپpR8(`bDVkis.* 0Xm\F?M P"{Ma'~!%qƬ ]!:gbي 5rI \$H~+ HXs3"W?Hm݀2TeU0K=G} hHyhِ\A) M$ eBiItcV" 0*V xVhmC R ֑2Feߴ,ە\=v\^bLa_kqy u'UyP Exh-S1e`(]18K*1-+)%qLQ)L84.\۠ K*;<.: t>~ E0^Y.e# Ǩaz ؃B~HP:EB AxzX#3i?JsjƳL5y0[!SR>B V{ *HNcΦq Y<" !Qr xMK<ٸW$")a@" .]8rȥ m0[inAL4~p%lU]j%m I&IwUNJp)+g޳xil+VxU4A+$Uhرƃ4XYbXv3 =Y~ Phjdpה$YY[/%-^]}GqsřvQp^ӓ۠ JajRpEܛQ|&5o‹Ao?wH ֍t"}v^lmb ۮ`Y[Ri^XNs+hoL7X=qУN_ڠ/v ; *:¼/[f:j{?R}WCd\m5B[o֐"J{^)mބJuaZqcb`jxՖ;nӭT72Bv odW}U3>@w}qH82Aujx+ŸO?~?0!T DDQ<{tWޱ t݌׳Iӹ!GɎ!lt` ۞$I@v~QEYr\\ݫ`=Ņ\޶0bB$}_+)9 ^e.y ʠ,vP"M?8W݃G7 Y/%c$zӯB ϶ǴctɸL#zF/ "Z18Xc@AMj7PmyPf6u! ntO>3HW56)/'k]=i/O|@& *;)/@Bkn}XcLMJKp9n(0yedcG]`,HuON@JuTV̯ ״m4+]Z36^lξBZrL ied-hQTêԹ1,dj{T}uooWOevn`2;+B#yޭ\ڗ+hGkTSVwhD]'зaֽD&QRO>B= !9%2y#P})ř?yO;.7I2|oঀzس^I. Q"I#>.L{_'RxM}>w^Ƴ9\L`J0$X%^hkbNJ3FAu#P t\QUMkvF)Σh#?^lr7nJFB[lVVc]LYY `o`Z8 'G?X7E)xtsW:(N$1-Ai+b)/Ę2dt͋[Q.[zgP&w]> %=RXRLEip׳a)Uc&wλ_Jb_͚>{S›\yַ}pa1K:2 ia1ZYuJ)J-Ĵ8B\si\C46aV Ŷ;| M DՎxI%Zoh2Y;R h)jE%uv ҕHC>1C(&ѩcD2Bqs+Z Glia6{=+m3#06 ֕KnM /{q1T4e3 [eV qi3{A\;&PP9^EBѾήI}sueJowf<&26JQl9 c삢p5||%h)(T"]UGwJdU&[I iB ; [XZ ~%PR73A욕x*A׭FHe!ZyFaM)o˗ͧ`qQ'"D<kM%t@Zct;X _BɆF%ȗЧ<$>uc _i56t?N5+nΗM!T?;">~y=*d[r c-)}ѡGi*n+=b8P(I~}+%TO[sxVş> B=uWV[V!Q>Ϧ}awBɵVDZy^#طbSLspo;,X})QUA\L];gt8|}x5tYfY(~Yi5kc;1$BɁfT,PFt69N!MthPk'$YD2Q?v^ FYVʹËË{Kb|P$q|z'qEac gA'aفy} wBMoɖiKg&*;*  E%p Rث#A:s(v xßd uSN/G2, qtEf[,d8>>M /4d9Q}s< `; O ݄ ^*xz1ɴo_kxxz#<}ܫ}Rep$U?1pI꾀3. OA  je_á+%h`0Gs.\!yY/76߻`oH&0^h x80_{ocե. 
HEޗ~Xpyxv܂J7o>AYj@vl7o'A4/y;#_~x<pxg^^EVRs0u~c:ɒ&SPv'yZ yxcB׉;ǧޛ1dC6O&ϧ'g^.[vz7?ٙlvs]\5>w"Zg~~2t:)֮t x=3 2T9o F~#p@od,1Ϻe4yx1gOK砨CԒL"B>\4:}phU_LXM7m)V#,S1s g­@9O!'l˧Czf 5ˇ 2e |}-oy~<_D&Ys栯'2h!pl5 "GTyhDoy P"-oy<~3xx@сsù{ <$@ү3@ë7 e-:e-o~L_*kD<.}OcVTG iLĄ0F9f;A bT?%M^2.ƬbEڎ}$ s ^HW:; CR?>=763pUշ.}XK[uߟ'-o)~Kk\Z)1G lM!yXb%s̊UyEh %Ua]`?[0{ߙOU|N^UREZU*hi,c-Q}NL@թ'՟KR $ s&S病'C9gq@xfW'\Q'j )}2[}'Q ͍(&1Q:r%`qDǠS*I9#6PK ֈ3,sk1DK:t}nuŘ޹O]5^}t kNڼgYY2͚VCq"#|LBbj#db [X+.aTR"BO4O2±010XjFGoՔ*U}?ngܖz},'8+8H"dOm 4P}>M*Xg+촼*X)#+Tl}W¹d6G;!9V0KC2NAD#kF*OJUj}U3w˵ ixկ5c>W,؛/En{i}`}`}`}uC >_ g1ܩj*`Q-U1 KUUUUTlWepZ+qaOuouI@@@"E,VURU,zt SA%f:ocŜWUkD ^TBkM|wRRJ.q>"VT?%,y h+dŌ3fjY:t⠌|i(j-n )\ܵ(Fz6;eYF+\; KKS[rz|U0Ka J{+\<}9>ݾQ oOgng~L昖RuF!t8 =28 &/]E以B/R[wAI޵57r꿢[NhF_J):UIyqJWb]eADCq,항F3=4}cBRtאXs\2z2l⒥,Xʽ%PSgŁ;S^۳ b>s] _Lfe!Q*_uKQJ6dI9ʅlFHc &}+9/Ҽ%:xڪK0gd*2Oe45PI!Se e+e9AҘrޡaw/1~G@Vr|4~/'7%_{4V=jG{})-vsY3K WqiF࣎X>YR4VQEGR!5[3@g ;YK<97h BD+[!hI=I]0xQ8ALծ)[lQK()ZU\1.(%AÕ'΋]6, #WSB( ul$tiJ4wUCOEQx"}26aƸ*!B:5pϷ}^EU>/ByB$l\LJ K7&!`S`&ȀB0iJLucJBLhкدxTg`dY #M- 4>Dc8Qy3c.U=Ea-0r<0-$)P" W' dvA9(ߒfȂs*$!rG,_6~=\iDDÊ=T[ʳ'a`-xM(mn 8gBv75ojY;Y1q/ⶹި=WҶdl浍MmljmcSkN҃ZX5=ENzֲj1JԄR:#".e+"(&#챥ygG:U&(BFc!BN(Dz p 2Pd)$]icT_>N 0'!aZ [3J1%(z+9*6y AZ3Mp75 whr}P}㷳&gW 1ڙyi W `)$CNXT|_~xM`ReqF`3{epv19/d.ofO|hx?;vۏzطwWW|}Ł}tygʼnH-;e~{ nLᕆ,:~C$Y){?61ҔuA =cNdpp,_B E2;n/,pl;dG ߾ܜN|.g(DGQ(_ZD2vV5cJ0]8ȹ00 HܳzdY=2{xd&(ulГC'57"Q+TL(jJA0>jlj!6 Rpn ZqCum_yf:uǡ1#d+C[ʉ2)@&H`'NaCJO+]J6f pQdCp 9NNre\BUgs6[zu/.ә9ͧ+_xo<羽L~zua==>~{I^2y8껫_L>+ `o3U]a§]4\V;6R'V IUw6rJ\f^9oǏJI>_^_5Wȶگ:b^{3||+ﻘ?e;0k|t}7t79=*O~:8|z9/w9{h?=b*7<ʛ\nOվ.MN#0X~OX5='G L "4S aG)LNe` y>od֛%s ,:خ( noWfUIV(&C*Y ]J2]R`neiGY&ʉhVVșJtXa6YTz7$#0Pl _)hzaΊidDc̺qǁ>b ݘcoPCds`M/\nZ f+ *$ 7mcQlstV.#vTGmUc ɓԬo15춉%1pҚX9!$U;vIv[{5&-\qT',]>ˇ[7h)J6cʊ]Rvٱ|8c2;z:~mqE]|S|~3g_On|2y&hIS [(jmc9/Ӄgd]Ijo]=NvSߙ9ﴧdl63#䛶s.Si;iFS] #w~I.Yv_$NN$)Ln!G콤H.v:ڴWFҎ]eǮ{V;n{NYv` Z~q$޽Xpذb9gq'99:K#p=b7ӝP R8.O~޷FlնXXj\F и(3x;2bpf+!0B8c/'p`q VҚ6\IFUi&-Mz,?JUrMZElP dSMgcНD-/$8Hj Hm:XBI[jP 
Um'4jjLph#0Z]jQk"z,hH[?uݹM' y j[I4+0Qȑ:ht1ͯ $# 6MxP ;F>#Aǫ8ЊrnGuY*rl<?;Cw A~6Y9 qNzInaILC]ʍIװpES(:s94h 4a]ږtsfD ';YnO\߻NYN4^ڷ\o1B;%rXeɽgwg,j|NXv=xRv\6r6L-i)ھgŇ -ԢҢŃ,.RLpގ?;JnfL/coZhnYzZE@PRHa";;5)rScB,PE@k?rژRhaذ3kjwxZXkF:f5դlӓCGxr/^`eӋ_!1x槦"eJw4ʪTOVl3zRͧމb' IbJM fmFD0Fh`zNlj)G`Ǒ>(PFP*o4ՋFG?eo6JeS/Wl(":URjv uIo@^wz{tէ#>HuHA`ľFT?vyhN3fjNdXTg?WwLRmzZ?\]XDuŒNsDTr4P!j jxo(_.4f3!rȻ]r+hr'W4MªXN.[ݷngQ5]lgaݘ۷OWA$̞ݧg"7:sT%;vO'Gwu}sq5zGr"ߘǧ;!$[HCX2kz"ls}@eXȅѧ6쨫yZߠy$i+;)y$J<}w|FMV*yխ>?]^ʼ+uI_՘\Wcj̚u5#1 2mCpru@_ v4)h(Z3P#>?xbqX/W zNfދ 8IقjԘ 29ØPr Ƴ[3\-2>7&WOVp$^,V)߸(s^//ԯ3f_xyvz܎toSm\,Srq=jm%<^}-Ee,Iku-s°࣓{NT,\鱆,{%mq,9MhAR䠄),X~^:[{ĘcG)]M-k1?{u RBW/{q cm%L xҺ\Ӻ'"EY&";|hlXZD&<0Ȣ4QVu^J8Zm+{̽񞹤8d2@aLjP<蹶hlx=\ '|+AS=q&褹ZL R탩c t䴄4vR֢,&묕$hD G&`A(m̈́UȌDOׁнrzs x@r?jSל3i[ :&# DmNrNO@EO\=r̮Ĉuo~zonOzdń CĀ`&w{xhyUA[g w}G*&ۗ?1o߿{Eqby7m~a-}'[?qvʳg(t뿬N0Pɜ%1)9C1Wkf>{FˆlO /m@}kwWc G>*ֶ6q9%+c5HrE,Hi @j  >ETT,P\MpX,gVTN9t BvCPQ|l4:a(wG5EQSV4 h6"d-JAU4 `UT?tUx"=*RLmEG>'}%BmZi;NhVq*i65R1;YKNfZ>S-hlH0VhHlI9#~;; K=ZOoűT7ZgW#"Go2_"}Mt58A~9xzph[iRKϭmgT`S Gm1} Nt{LR\56߂],9z9#~#1]NjwqyHfUgEWW:>=jwM/rEܿrWrYeQ>e ՛RHqM/r3ed Ew& Ije/(wqO&Yy4,kYLfM t\RJdPkBhVw DJ26\;QV}'/-2eM`u%mj7:Hs7!>w]}p9H6l={U%MInU߅-(LaBxhbH&2f>F)Qc5p/zqtV̝hZ gx+R"r=S1r }Hu#sŽ%\4FS^m99UўlWdpPAvxi FA:}Ggǵ﷚t"XBÇEdJՎ{j{꣯nokY`1453?>\}MFF&=&Y>_JoaC8 (9+ 0]CjMU;渻RSJmrC"DyDQUKEqc1VYN[IU0o wKJfxJi,cuVѽW:v FK;ą~>_ן>QЊ4ގ9NU x~^>C-}w}t!}LDz83H#h An'l/TcŋD)dƜBPT0SM]IFI1Xb \JhvIŇr\$s"߳CWqȿ-+B#{O AAOI> x(h"+ c[>[}. 
Q 1Y9d+& w1:K#9mg'D܈S@ {+5TRx* ǔZ-}Y`=xBMX$s|Bl6x[F /wBA|r7`R-喜8gHN,$1_Y%"W2zQ$Knj`f;ZωU^9J+p]9E/w'U|R5'U|R5kT5!bFBtA&4#dźFK^$FzσbtQACD w!,^YL9%и*b]Yv}fegt}v8g1Z*I'.8p浧@3SA] uTFs\;a%_oq6a6aFXE0JuZՈp; Q+:CNRio1w^;V|Gɡ*Cvorȫ?ɂe F,4~&ڰ(bUecn~A*aB\t.H+w|9rЛx՛ׯ{{}VrZ]*@n3 tI7Sucg_>)x M!fzvRk%y!e|GV e0P 1ׂb*H8AAiv`]c[dmq53#ɻ{ik XwɋcGS&~[x܏z@rۉIb) p,?_*j~Eױ_0H٥*pƏ eǽsvJeR$g$h rpY 7J lyN%% nu*$A\$GLO'nS2`V$/v\裱\͗2O4؀Mwr{ "vٛ6%Om[k>SY5Ϛxƞ|5,<=';.'#d7PdSHq*B2FA5O6o7DU2e3O=i\";_tWVVp"ǀL{@jJ͐/̌KV)H0_/^KrBqVm?>_kCW`²c}#w O=FtI.&eX3BoM>7y%'鄬s-Y>JUF^&_+a%m잴: V+ (N f AP$I!-~v$~ NU1_I+yeXl ][oǒ+^H}î{rlIKBFbL2IIvR&69#tWujKWO%K9I6%~{dCI1^C9+785G\hzR&[xEK韓樝{cz0it2њ6jDCߩa:@+aj23L!̶~2PZ+hzSղ/VIJU#xc.+J* c2T X 5';KBwD1ZO 뽙,dMX^%MO,t7e:6qлϬAmu/wOV(L9ǭl:SLM^& Ĩ80&-ڂZzu-Hcl筳qcKP:"N=&ryNpf3_ZF+Ӳs(Tq7Q @Awcn=tq6C`BV*: 2YXye#:1uc܇cӋ5vW15E˛p3)ҷs7~3uY_e*矋~w\ET 9x0ZƟr9LchƛjYM-AD)Ĺ) չ*9Uȥ;5 Q%BG2uo Fd=lWPרQ2%sb3OoC&e#Z<|ps;36^fLرJX S˜XmɃ,-:ir0Q!ypAmmd`JO[%bNcG[3 J=xk)= 6y鄀M3܈v#8,T[GE˜*z~7\0OB7,CͻP-ĻR z%u)Yd\.Aܣk󲸄k zd),Ysŕ%pͭv K 9(2\ͭsO 6w2Кd3z=WE2UD+k~(KjzDG .R'Ⱥ's"HƮ:y gk!WtXƾܒ^O)JS*eoڽ2}72uݍ^~(D<`[σc ϸfˤ̷_uff=N0d*\td _c1 &܉ݓG4,6߽ߓ?8skl-OhYJ>??2 Mq. R&~C~2OvR~qo9s<˝rBs̝\gS sw zB*]/<o6㔜X$Sy~H=w隨MWCSR KٸglN$LH]q"P>B@Fi+(R?)L?0mUPX j9bA_D*N>PR fX1!v#e р 7^amCB<N 3]a-6-EN^zԜ.̾;D7nv,yȽHaf&-۟宧q $Vl8.ffec9sTG~ڈ:%>w)KTF9sng,(y2JE0 c<,)í1[ 1J%js{lbHI![JD\j?p~ Z`pv i"< SJjMLk~wD2Af'iZTRRkɔWWSݑweW@:O% Q(F 5_Ҡu+Bp1E4navCҥZI/ h΄VB$^i}(\! DA~MJъYZ3{^#|W`/Fy.g(^oQP 1q!QӣA/\Lf<n>]s?߼ !s iƜvUAg9#.aWb{Ϧo]XŅpCVqkӂ?=rpe%!$b*p >LC1טW۳nBJn{@M1|,?ܰs2rr2Ck3 'EO)_/yiG]hj _c$7qȐGY@`ozEy7o+NÔQfc/·`83yw]X:VLgWCa&vm~&}7-u{=O z\ YbyXIO=Qޚa#@v(TceqtEYt@ CXl-zˤǫ6o{- NMn~9kaJqj/ٽJef~>` iϗ_'[gz2%1J oN-?ˆ ;;(4C-1N r0Tf 3 y %KxK2$+1u(Y< S^:@mwc;Iho!WnNGdkrSc6(`4 Bh FdCsMÿ}ͫi4Yj}UZ۳X Ovv9{k~m2Lry)D7P#P^Rr _Te^)x#&. 
Ϊ;~?FR^U*cc U7a|FM,wb)^JaY*2rBa$N9E+t D`Zށv>gɭ ]nyFu]q"2/f͋bn;mZWr8c<7K+# m+2UBN#Bn% `1(8u4@:mF&8Pos0 @4bz(LU1Sέ~op50`tiunpO‡x}jw)V^sT=eӶ+O4 FRղ>Ҏ'/$Զi8VX c`b޵rm_%N\s1#Sӌb!a(XRt?Bo͵"hA{)MdHRɉ^*sy5(ɦ*cR(vD\Q:HpYqjhi' 4Uʌq*J%PJ&7!p"㰚Lr ,Wr1'0b='>g!h8"t'n26!2xoGax?m"?}榊P7 l>^˂Dyoᓟ7xy=g`׆'S:*3;}x>(˒x2^i,H"ٻ8n$ fр?bcq d/r|Ip{4RkGb?gdrdEt*p.m}CPlyE RX©эI!)w B|gY_PDʂxx@@ENAxe);VHxq /TCh{bLo3}3Bx&z9˽^rQrH΢ePS3-pR:$'po Q*guxz"x1З/@E1Ls365O  t)Ad^ju>E 6Hb2$Zß]y/C__IX{M AB_{ VAf' Z JYy&|CrFdoXdYYÏd.&yu{kVV7E50?_ۻէ*Yv52m/8fń)כugnh9L`Am:Xsz;i>50yfcnoRv0 gMErdT|k nw`Bj{5lvmC_##ޮ?}wDl! >v|2hy߲tdB2*Kx*Nx_Ә?4rkۚbxkBBSb(S 1 .|L/3YS 5-fhFF;3]L .Jry1)(b8(f h4Pk4i`ptNHm^mc;ਉt%[E1NBI%E'!ȑ̉A;slģuVҹU*}1!|$&VʱUUrbՌU.( @9H)!zFS DKk.j`LP#ҳ P0:߼U~O s#JQIa:H9jt"{ AD@K3'>%Bķ4:6= +r~w#q5I^-ߝtAS2c; B iJrP''#s/FXXXPrC#z کl!F1`#?e71UrLcfL?ks/1VIrWӸU5ӸCuDw"#UBfYYAXTD4(\L-:XfPJGdS2Ql/V J6vtJag5˶c' ՞u#(5M;Bi"iDzcC! > R=En)Pk5U]I*RrҪ܍o}Х+ybk`l|WE#a5~{}wr qX7ֻ|xp ,N<4e3ewsol~M #6x փ9լrkM.6J~z'^Qo~\jF=N b#W(p竌$e b#orY=۵cD.žsxs ~T"dWHDE>opd>l0nq[ju)/7X}ʴVftDBCst7!X]]? (=KI^wt|w`'=kdϑ)Wq }[q4"qiNZ|?)qO9HӛΜ1$M gL7L;&N7/@>\r-=8fe#yrv*. 
s[HC.h,2_4ѳw4F2;PT%Ĭ^IT)>1ş=xH{*xF3hS !W#J+A8Y5N!36\ݓJmJn%4gtF}i\HGNi@3Afޫif-pQ.K V{ޱ9J8ʆ9P ~ޠ"G42M6FGWf5&K9c^UV.|Ԇ?խى>-+NTDхل(g f2 {Ji3sIh>REjAc "lG'wLR1U]O*!I] |Qv(F(@˙f-( ik>f'`JIK,1TPs<7ʃ$)`m-4 QNoFW3U^^+ ax"$Hg`gV .(' *!38D:r>!4xuOW (zES&W_hP:Y]Ò@Fg U-LŦ,+bv4WAʬ4\OWPEa=ij2/Փ ?E(@JnejSsJy8ʹƶeHeJPg rT%9PmUr%vHt5I'G_kנ'\ i C$*IHAh (Eci4ˠAHa}5gV#QG{ȃA,xRxD+9w12Y:d;ґ(LJNR#(O (ndz zd 3׾"FGjtha3kty:FjŸhQ8Oډ7CJ 7MbT: hӾ4:l>5 3WDllSJH֣3C.ZϊMlxRQ-̆"wb6[FvɆR E B@"RBw mA3{aM`J^{Nlc:uJDaKH4@$9#C$ӑrkr%ݍN*qk 96i+)ъva80zn sB -D@)  bwʞ4#g:_m_eDMִf7wg~\ ACIdm.no|ݲ;Wz4(5,dzx>S ?Z%?Gr̻c[] !'%aK"(4B#ו Zdޙ7ƚYHDH-eD*%8k$q#AK..FQK Q"/X92 sfQͨ~:RvIDeEu߹qI d@¨HTqGPTb>Ʉ.Z sK\eOM.@rtAfTꨮF),*2yD$1#@ ]].̸vRY *CucY`PvM*q)ZHz*P7Xt>:ުK !xk; C4oh$lc)qm;B(9"b2|q7x(ߍdk?ubÌ{2r!Nj~x8]3ae'p Zx{eoV.U8ީLO DTD>ƵT2T{ޱ)FB?+||DL5i p*ODX?oɟ4#"n}xY 6' c$oplȓh:QƯmҭ!LDlIw_#+^du]M¿>[4?+(_&U b%I7BOƣR7oTLIAH*..%pwLXNR^3.&nI4O^Q2iM͋gT-wP'`^sah\J) 吓P&FۋΠY,])&(~Y|%N.ƿep"E\\|vW+8yS¼C&Ȼ2W tn R[av?>u"MdEk?_՞wėmvP,x}+N')(e"p5bA"_Wθ}!e蘜UPõDѹX}BUj=hH6H|#dX3eą; %hŢo&}Lզ*V|u" VF1<~[0[AnU$cTK7ג>Ƙ!&V1ȗ#tJ$9d&,KHAl)HRTR]kRřz2Q>]gvPLjMIۛq2b+me-dt?PkGb`|Ci8{RNS<?^|SY5}'MJ 7[tY,y{?BkG[ f8Q-OcZsՐ[J6x4M@oj7_ C]xvHmSH{)t^,9V@2*,$PIAY 04/da ;3 烯n=ϫljyԝIy "*v 3cT,1jBadklQerq$'BQddž3Y^<&y{>k]0NBUCS+1dFjcfW $*b$t 1c^zНwaWbm[LY<_VFji*WF</C??&TQ}0q6Va Os6zFϕr\Y`f yaFB!e[4ŽS40%ԇrdX1ۛca?lX0 },an-%-6'aٍ֍=%S KgVzKpn>&xеp ?aLǦMgklE{|qEs492r&#jë6"F&-Zjy'[$QK+u]t= 6iBGD#!6H!p̘5rKte} aGƍ[oI 0)nv<xa䐓@# '" 0%Dm 6dɁ@4;w=^Ԙ)N[sGWD1,MIsx᜸atJ &8$,VQhd(E J09;moOָ(&KC x01r~XĮ|B㷇hn+?6VDYX9hy0rqr]Jn|x{yX'd(C[̒e2OO~&߽y Q|BҌ/no;}Z2`W4G/b?.1GP!yW8_~ @q/<nuHkt0X4~>hwԆ 'ÖTN1vA爨>4qLi `I]ojV?ޅEᔢ<U>MSVI=s ԵPDzt4PNc%>?t5%}Z65#j4@oDѳdx .K_]\] b@RTf-;;.F.$tq1,Ruy_ޗsC:gDQ5ʌ>5bŶdk?Ŕ=潻XJz|Bv5-)5Kʚy %o@F/fҝm7A!a(q ĸoCaZBSWOD_S^?xےetS|* /Pli\m&?n\@]|iWIV X-*GZs]{tKBhrB?O3Qe&iC$tNV cɐ~<1yvq >y4|O÷4|O÷Vhxp xm g(6⥳XNZ[87J$)궅2R#p,e˙S:($rJJ`m#9A(8n˒2a?dyуY`.#-6?Pm k0/G7?BG 9#S Z;hʐ1-q9V+}?ꐽ|傝vUrN*|{kNכrԓl:QGO+U] u C>ιuRg2Cͱt~o~~6uD/6bQ  rδ]fP\E2FSLeC5SB<#9drYw,r٩!q_"8}<@%Ea 
[binary data removed: gzip-compressed `kubelet.log.gz` from the `zuul-output/logs` archive; contents are not recoverable as text]
+x-y]QN.|>#tUP ?I0)q1XQњpf$,NS0X;Z$̽ Z=0 **fR/]PT?ii6L]{EK}WA xI(OE3$cg|{ls h&)tֹ:RpALtݡgU~HlSxNLgWi?mܤѶ"C+ס\OL OLUh* vw(PγzjSe8Pc%J)',ZKQ-T)(K]hTBc櫸K6ozn|;(ܭm^\^f,zߓXh1Y?#g?޾?s%kmWE!%^ CNn&݃Ŷo c{}I$۲-DzurMb& 3Cr880F놉LBf+yo@/$H٥}M !.6_Fm̅E7Awa}> 9GU7Aiu鎭3x> bRVkD+Ms!MGriZWzxoH!n[ܲ(N]2hzljw0T[lˤ2_ºLS'5!cWa^[(̄stds.?(+i7/.UKӱ}$y͌Xmp-׶km DE(1?k-[S&䁖Zfi6P-4k3Z8TF:̷ ]S|:6LJ0z&)z%3rf!KEO󯈮S+I.dڇ׵4e-[N 2N6vn[6<:{CVV"K}gMqx[^ N6vn O9i2j"$<2%}wM vˋA.p|੕nw/enEH.d*q ٿ Tc)b$p2)G9`Rxq28Be}WHlO9CG?':_G?'90 @5uPH/O0_PHC X z~B#L1~9: G?Б`" &M8 G? Cg#{@>/}_g7=}q;FL_}7/~a_ao._OmvT]*.s dہ]GxŘ^.]v=+gP^dP o=*2 L~k$ϩݻۘ03w>z*)}h.&wFK=?w<|ۉl~h.d4UͰߴ3KC.4o/ A >(1yk$]ȉvxqSs<0?Q˵D$|M9Z;IG}JT S';R&@m_ z9}kͪ`HN0@ t#>7A:s(x|T(PSô:a00a&^ouԅ #憢~w_I20PN+tt?%}o^O UeU盛?` 'p] L0)lx{}~ҍ0.D?By8 F\.HG[~&9`u\>)XFodX$chTK{nAk[[ g-GlN'c* v`rwK8TGlYj㵈ia\vuV0]ܣʱi LRB3c'`OpƑ)Kk)&Ҷ ƗXn:&9n~zwo2 ;=Pʾ~e,OR>oQN "r7.]OJ 8LGBá9%]6͈mc!MH8<^!Azz EPo`Ku"__}*-.um6GTmoF -v&z0׃ ;ƇpU2U_ oȁ^<RdM/pZ΃%w!/rK`j F_)kB[vH?@! Beqƣ/n'͛ YYϩd vqV6ld[jYJs쌋HlFXnuB8tl&RÒ^rFqDf͂I;61Yr%>$Autl,0fR,Aܵ؂=Ȋ W*^X Hez*h F ġv1 &7*Ƌ 3T(xAwg (53&mdmdE; f؆֎#ѡ|1e 7~q Ý xc5i|cv;OG$ٺD!%+O`&lbꑛԤB5U._7I^u_ 8+E!QvҀ&&~aw纮D)P֞961}1y9r O*I;%:\T'VzhAeIs?oRtؕ'-`C92m(۠m%k0#*Ln<Q.]޴1$hI>\ZڀLlHDy]&X;&tx={Dd)yEQyr ,Y5}\ޫ(83k|<:d31Lm:l-jŋnu!@`EI WL ;}Q,Eə U-K^Z{a}dTcK8kF-88!ۍנJ-RkjER߾gw 31bC/%bkȼ@C5'Ǿ ٻ涍WXdqUGR#TR5E)[Q3(8DrD =3]IOmH\#aE]oІr-zAE0SD% 491>hC,ZHmhe%Z;ZJ +jY&n%{s/_ X)JP8Kn_"ěh*nBGŶ7qh#˛F `Mn(cU&u<(2鋲 D՞ªoBhK@#Th"ov{Ѕ0^ոX)J.JFOסJFx` "%c+c +iX"LYR) ͻ;xr,KS_DM%ۄ[d;^{מ.nO vR-+18.þHV jML@6]  Aо <˷4`*_o"%!ǔKRP839e;ӸsqK3 TD'(&L{Q$\L[A֛9+ћm$N߿v+zz:zfܟ:ddz ־ʪʪʪ*Q8p'aLJ FrRSba^B'w?mi._JxbiՋӺ[xbZijmk5U.P~yjiӷtdtEes۟gZ6RS׃3%RgB^T}B y@+]Ac >*P0 T F|_JP)mY JYgi sc<Za_ѢE+mp.ZQKڲx6>2>tv;^&bQ Ů]ʠAUVCOX[\\j]J׋͔XO֑xSfu<9)V\^f +L SW Q0kkLh؍[7l3WY2:5{.r>˲CGU OKhRftyI1I3*x{^?OR js n\5*IWK%QE5QeN ?[1(e sD!ZcZcc8鹳O"5,=htNzcgsjwy)Uy y}=;/$*^ZY{34xyN'_,GUKL@6j#?v U IB*HR*dX :cI?0>ѐ3]j7;2Y^9Ui#1X ;K$K25GޱlיYuxV2$qL؜|?TAe&V-T9~!|ǩCD)L yad?<9'DD!Oc+ 3$cЌ@~kٺ8@\YZcCX=/lʬ A 
M4=Ԅr{5ؔAcK/#ep:~-Mi#TV4!oIU]質o"EՀ0U5Nrf-"Q/f݈8iK11;A R\YI|2;P,ЌR.>HP")QZs7p5Ce.aHPUFNgG]@ *oցe;;AB#,Yf}3ؤt pHF@V H雙R1nl.O"߼+!ѧKL#rO DD7 k:Lk@Az0xŒIC׫R3g%1I[s؊Ӂ%?>ܰ 1/aVQnfN,UH\OGnԵ~69[!=Aހ5ұFӗ<^Xk8 \ucg室C< :YAGx`l4MC lz^{4Y^ZRH!.9gyrgyc9˳^Bi qIIIQT?q/t)1ȸeHej 7{gd{?~qu8ZX_4i1`1XqtðCJw/hS3h0( BL6AA{k;d42w8f5f~'d_uk-`KxkkӺtW_5jt]s Ñ6549o[ݺ 84ǴO9H>>(Ԉ9ƚ2R_1|eLJ2'M5_D A"nS7c!6:`>Է~ <'q)臧#~3ʧÓu[I4HJrc|8j؏#Rޕ2UU8江楴'ߚG\N~>?p0&:Ý~;ۿ?>=ٟ']hSi"&$e6p{QiOfEU˺؛#?%TInL3 wob{ ?msxѼi{)G\WJ,jxui=^k&nrYTQ97{:>! NQѿ{ѥ݄MmKW=^\st@|?]UGJ]TF[VNܟB>?QgtʔYodJ~ 1%%O? 2nGrA`0+e=<.3B!z󷼎|SҠ㧽 & ~}5%27u}h )'f0J~x16v󾯧ǗMN?~YŮ][iM&&or|o_n|0]URW|<^ Hԩc~GSwݎH}!32%a@ ?v1sƟkH6kLEdۻRuTo[8, Ybcckf.XVΥ@#Dn-C~Z opެJJ3JJ4*p,W0*3H΍} ! "vVh:#@e3{lfV,^;ç Qcmi̵Շj1>jB[:2ct<}+I<'D G֙ tH^~ֽaߕd\l^s'wWѱ٧2Dϗ3\g_I5c)#Oflv[a3UiƓ4"hU-<@,`HV$zi*kq1B7A'@DA1ɭ^ج9mZ&`-i6#S7KDskBTIX7W6rNjDѰ7B)F]EDPE܀ !*aL!r!t=! Vv-ekn&|Heȗ2'%U1HPCp jˉXF12>~7,Z%2xQBI}_zL<X_A y+@s22`2N  8B|bԡN+leރcr{ԝHg !$8.'> bHc`5G4(PӘRi6ɉ9l dAN5(S[ޮ/M+& =12S)ƒ. Tp_#O2<`Ƿp|:P exAW*\=A\="#t^sŴ3 (C K' 4c")+\O '$.g"beWGv+d]Vg]mI'!0&Oss $ # kAd9ҥ[o&2@Nza9Fx?_Љҿʹ~o\'Q kݺ9d2dL`LZ_ݱ=9kփ+HpQ^<68͠0Ɠ(9u: nx]:qχ8T=r*:r{5nX$Pj@w/AS7vn J*ʇPnKϏ:{{W/Yڙݸ;vמVv{E[ <}aSYM̏W]صN*`/=|GqHGAOu$8ޚzPiW.*/VY'GyNdML&b*Ʉ 8DQ 6[#ԒdRN䱴 upx1eMF|r(K^]2c(K1 !% Mˌ- YE̛C*ϳ$T: }27>2N#D2Qmh4G!8$4B |s \ՙ|BG"rhY0@K\iֵU[X2TñK@OzCEM ԃW"3 ̘!!-]y<$!0vb\HOp,XA הzaC5Lbeg\DuIR5Z}k9&ӊє0'cP|Ӆc9!xd$j4Ŝ f C}{I <9 @!A fq}$0džg6>1a?IܴӴN3&9N$Bv-!N{w{{TkmN{ݝ0E6r Z$ QMգW <}PEDV%' -St :ɚڞ|&/ s {߿˓V#rj.{B & ĪcRq~kQgc+cyQ&MJ]\\]rUkVeh~GnF\^4z?Xi™+*Ʈ`Q󡳉V/SO?Y:֑jngVpQ#iu79/(VDZ2׊pe6}CBwtBR iiCIbNj&e2]jBxJM;ڗEM%95rFߣ-0wQGqq)HV9;Xv_ݯZfゼ=8g=B!bzq=ηhhh3Jܜ40f #C0L:(d~ȒIQ*4 HaKuwoF,.fe"d'M(^Kxw/;;h:'E(GpʈEId2[R4!R;Xv)!E/j7cC~vLn` $~gcqYnX[c)8cx!rcn NU1<2 N}9g* M3.Ap!.!2("neHF9qRە땆(C5+p 2Ӊ`Da')L(1Ch+D 2HY΅Q0BI]ĩT (}DjFX$ro%9O!%5r7݋# 8砘,Q "@8#)2 U6̣?ȸ 1E]YbKa+ =gs3l FN׃awTnw/1ēW{,q.x%:.<@HJqFIbV6w<= `$YkBtReRCG3 zhShLG\?s~"?yy0𸥔${FfvB?MieZ`͎ȳD . 
C ofh] 3&)8`R9(<7 {΂eAY \֞9 h*Zut%@EZI\{dbP x8{4/p!.i\{l.őFiyiyZHe@{|)1ud@Il_9kV[gֹʟNkQkC)|!:W +kI*+W ++9IiNRVV$d쨲D5攩tW;Xo|\օ8|.mڿ)įXGetVExpD\H#L!dmx8~bo"́18Wy+︂uSdIdBӘ:NU{|TeV"[YKޟv[\LH:MUm\Y2W<CRKLk]>ʷe3Q-sa[F"ǘb0FZa,xBp`#-#IH-A5\z8j2g^JLN]\1N%hpݘn7gbz"4;e4Iщ]{΁{4pm7iA}ci|| 1?:&Q Cmx8p;N#QC]Ƈ'/:S=اY$N|xqDXǕMWneYćh1mj:yI ]VyNwn\gboڭtGj]'>DSkv2e## L8`VxAE*dAɩ1`J[fTJ(#@2^MZ"T c pEDsLAd8 68Gw\Т0l;YGC%ȎkVEYšUu!G"?DijgHG}$2#9MZkO|\\)za}0ukz mHҮb CUw>:&y/G/p~jFw_qz޾8xwrvo#'=v;|_ߦ_~˓7N~;_j;q{; k`M;Mw+~|ο'x˻qoL߻Mx?]J!# ]dnzɝ ~:{?ig4xGtHi"ꊱ y^4ld;9L=X.>mB,˹L=Vf_= EV(pm##{)ihń4H0@bZH; ˴ p[ C ]% irVȏq}q$:Bh'cNt4|YW"ײ<+ N!4CcBc;'>=:"]yk(Sj.}nn^&EgƝA/l#o܌-.iPI/s;ݾV{ نlSv¼d/_>o$HH#c@ce 6XڔZcs *y\\Z.*ĥb)O6jp9,&, BB B)}d\2`pc&3, ) Wd(yAsz=< L<6m5׭$Gk,z6``IIJ膽W4WfLNhW;I-F4tД=cށ; =.ꁳAʮNv&gݤaU /Bjv 8Ƭ(H QË[d:6v2(H ëO@O[Ϡ4)g+PA&{w=9&Z>Җ$v7[wI!I@Q WH =/2lf ^M#/SeƹYVw ɶ<E!hqRlƒ*a/ aݸR?=;i~v$v= By* ?{uj C>` ꌯI}r#LZk- 8<= =J8ջ}]xË%RmBjtݿ85IV3@"e"}v'EdER|6mw**, x胈RW~YwHnD'aFGvG&%]i71 ZcfA9Gs10q<fP,!0$(|^XUm\NrEBH-4[Q+4[{~ǜdIËfô.ĺ5Y6VUYvT߰}| 鱄s#"`Fʣ@@" .m[+ǔ~x&9q&v:gr= jdIen&@Iޠ* KY#Yޛ<"3 0| _U5 6D۩\`v7Dܨt~è4!jrC=: N^R:sk{Ixpi.}/fv˳#J"#JMuPT 'Hk_.~duw!;'sfMlH^fUH]hFZkBJD$a*̴<&8>êpW b֎x̭ey.}.V}3"Xjр0S7;;^(5\M/>jh ҀXpA:܅m2<Ɖ)51`޼lC,iJ:L+?jhuiv4yJ1IsREfPQ+Pc\g,?6q̐#قn" hSɮf,yG:cev;8gru+Gׯ_rKwcjC΃{ -U`VqodJٷB6a\찼 iMSI<*,,A`Y`lyLh;p*P+Cq($KMx4_,~ɾHGSjuyt,]O6QDbyd-wON[pSҶ~IZ۪7nmymPJڸ9teVˬm*.&dK]i|7瞻eYٗ?B Z6|v +lz#ɸg\l+km[;CVSAXoNf 4Pn3S%=ɣ{p>D`<v&5S0QzNۂw0}gf~u?i`(3Z/$շM: -⁢2<56 >vuݠQ\(oN51Q(nj匹A( seLJe$ȑpd#ϊeZYZN5ݶ'yO^jTڹ;( V '%@LPp^"C ĉlոP-dq`i>KV!1s@rͧ|noOɶ, hc<1*)GP I%)HB0, @#TeZq*,mrԭUjRdsPT[gG̢kA֙vegw~/W/߿4}&p [5X~өvK;ڵBҢ~&Hs.h1F8dϨe2SL4)F+ƉH5JOcHZ\E1 #RQ} .x!WW( \W*].+,C UL\rVY"<8KQE* /j7VNzn ϽZ&D!agj3P&(3l qpS4 FňN%IC$E]HQ"Ahj4pC<lA>_<3,lxx9)0A0!}9,x?c@#^\)pL=|Sh2j#Q/N s Yٰa1b#`/9-fbˣ1ӞxZO82aC{Ocm n L3ZE~ßɤqIhӈw3@!OƺCjN|Rn֠ ''V Д22v.c iz,K'32šW͓B3)kF$07%IN%[DBҔ>)!Khc\֌^2C/b.aU%/WN`sPۗjܟ8+|{~/No w[W]DPj1 BR@Hc " b &")IP$bI*~#wAXdPn:i)%uUv+"ZFO=)Bnq:b@S򽗘PR?]$$C;OzI i=H 
*$4NB&(6L:PBT%ZceLT)$) $c2FsN(4/ؖTϥ Ez=wNFaG {ېM~gxA+IepC"2YU7^h|~/6pYjLri]I'Id'cݍ Q_o|a` k+aZA< C@5M!X8 IXsc,S$&ah44NG716ƀD ̑VI$a$sExplAE ƂTp*Xoƞʥ JȐ$)Lx$H@X<\'ޙzbC{3~CtWw:G ߠr+޵Nnċlgq ^{)4o)fUb,o/Buq6t٣lp1]7 x.132U2C讯fvN 6FVm{p=[Gl2?q~ ?e7U}/+5\imO]0cp_^ڮ6Ϩf#4zeG޽]rJ(V6s*$+ .K[ab(,¸@D,4RV$"㔲0V|X5yk8K@pOG3/&nhPatrTЗ4m1'YڋOxusSIY&UkduO8-]PڌsI5 (d=NPe֌h ӯ <3ggoJz}zͿ]ލ tû X ,R"0b:R=6u3" F<"FGB 4I)WL 0,V26`$ )cFb(P.'@X0#J40(wkkSדxJW2ִth  Do_^j};B h#)Ktr]1\8k@ۑ) ?,H{l2}wH2/ "`~U6i귻:сM1+f!#q L]_#R!k#Ħ^W]W'54RVRB.{&f!#- 'F.ǂLr»I 8+LI BHe"3)! 3(ܝmZfNA3``_7wT$ >c1bw%doї~_*9>/S+?lcU1'&GnrKV0 ܚl溍3gSD]fξ t0-\]>_sN!׳Ƭv{F&va+Z{%]P`໙mzߘ mfY ]X/eGq\lm8'4(<􂹮 K?(-V]8 'v.]ߥz٪*\U8#p(Cim %JA1.7aXaШg\U꜐ߘC8?wq)cǤy=eDR+4'Ĵ5s?'dB rВ3OB_YUKZ){G ǨBl$ܿGQZ\dzlF@3CiSHT"x@qУյ*[y(?%G/nz^]+D{}!|!1 m؊i=$^scF5M%M?ɩanKr].')H<OjvF1q'3Xs|` B1A'5u6"h^gJ8D:pL{d7F=ax }B2R7yfDC9u3q{hXT҄Hf< \ĕJH c $Dpb3╋Ge@e- X!s8„J(@E$ ƩQ`hBO]x#UTft2 u~樃 0# A vH.nV9YB81 67b^al$ n lM}E`6:eos bfI+(6ǞtՈ'om'iGB!J~t0wP9zŏaǣ8ݰV`|4"cS_<,d1~c>?Gu"z7t乡wr!8~V3tbvxAe[[}툵ÆB&xǼ'qr0q](kJr'(x X?U3m}ϝ蹨/ݨ7Ffp|1oݻl잻 \@AWObױo:IDז翚 {TaKyjEL?\]{d|_2rb럻:??kw?6׿]?ޮgav1ΌNf_^n{w/{%1P<W[sm'}93-P|1sj1pZ&tspO>zÑiǴ#ù7 99b~ĦyP . 
o]|4뀽}kdvѧ^5IWiwq9ҷI?{c^}ScNXS#&lw:~cށ ~=i.ȣ{:YㇸzgW59gY7+.>NAfW&v6Tv/5Ҍ۱l6d+H–`[u‚-F====`0?]tۭs xZPԅoǦgV6fa]x|(@w=, @Zc5:NV)|k+ H۶㍭~.H[^v nmn짣˟rYɕmɹl*ࣀ!܀jr _I\7-oڰa ײ4J_ی᮵!ȿDVo<,) :"͂= ;Kcstί_/w~??r(~-?Gշ}͟w9ˀ4{˼^W^Ⱦpqs~q!6iPZ-뷺vrwI۶ &+3D7n J]G>YYѷ/烧wY稣CfSk~G*{9/26)Mx<Fd5+b t;s"q,v_%T[K9) * [AIcPpS|EE,DQI@;R0!a(0LDx͒(Xk*gKiQZՕ17u"B{.gsg䨮OA9LJ J$ O(z9L),&[,QrqASCOr1C))yq*Fb"td8^^g>2j+l^rQق&x6j \޾]j"S6IIlRrO+xRQ]]fÒ* @JB3TȰf-SgH@c3l򼿩:귂nőIe1ɬq\a 4ea-B(^6lP @/&I tZz[ {|1la{aw.ؖiQQ`YGuLt2: iF|ԉQw::y1(nV" 2*F4ԧPcva[^K5 A*-4#%u* u}_Z ٹއYd^L'hOjZWt?Bv6#2$QIU<=~ͪCt!9Wɓ]o5 kDۋ^DIZ,XEI+=z{MRq2]zwQ8pܒ40V)RƸDѢ3 څ vaƏ.#4,b4 5VN( 99 NjUꌁZ`!yŞkV'.­('zj*\^i߾yar=J7NùP)`4DV1kDcRPE KeLUcQ8:*o^}hڨͤ9pyugߝj ~2EM<_WZ\s]{2a.KLGޙ&h {1Q8$T70h#) 0b(?X8B 9 P ))[z-ڴL8Cv!B*UXlG$ {Y'5{]!BmU NX飘@i0Lzyzيﻃ7^ux(um`l-a⬊O{Oj,~wd5|,`{=E4:; T?Ih w0b3BNۃJUe8XiS8xT-HIn֓bq~@2(e&.T\! ;?hƒiwCNSvcªRZvEL $׭+&w6wO\a{1#q@\&Z_ dG什3.fn͍tDZHcЍZ|ڧOzv%xj-g[ M0:1H:`F4EH;cA M cPy8P@ n.Os.N}05{}L"EB/rp!f*6ZE٬0RBPŚGTp|-սPB\ݽu\78[Y(1u94fg8D X,4 Z`2=RȺ"m0se(OzsY ˙A$_[C" d(lp0GaXRD25"hitBt$%b1:I ۃ7}XAo~Țg %i^5>PxS .Kv#C`Z[TtL#%1iK @qmY&B! PNCMHbs3DM0UF:oi>LC~ZߍR/apn)ν?_uʡMϛS2X! 0h@&:r\3Տf<ʼt7u&P@EIr Ҕ-@΅TD3K tFNY%C*J) 'X(E2|>bz򥠎 !#*BЀa%VJpM:9@$8)ǰOAp9K\ҭX窎-D t% "ۘ1E]ZJ^ eŽ(L,|;I|V< oYPIhBTɰw65ȳ˃ɋ0}oǑaW"%c"L).58%' -2g1 ߻jz= ǫcX5Zd3E&.ҟBL\}ðӁ9yc0fZK.Bp33;!rN`3(DVg!&%]ԊMm^ҁU;.DsYjYVL+cKW\Lw-5ݙٮkZ]ըe߀ntZǎ߿_vLp,L>?>=wuo~6[?އޓ,W|7cg#x$X$k"K$g߷l겺C-EnUU=EF[(]~/α)b߻)h?68YɄc`-׽$z~1h0?qN_f sb~fqo-jMúW7QZhzY+$W0`q'v~(|5ۯnGxߍ[ŔTr{d=.A 4xdSP/ng yC~l|w%X&W s!xcO뀏6bf7'eT;Uw7enpYeS44;}?}`fG09_v__L=PGFVōǜ)3B[?p0 B<ݾ|ƽ?:z8(L}3k\YsYwJtt5b*~j(\}},+poQuyp,*%U| 6}.Gd;]uEríO?Nq+s'D# s\+ġ]0}$ O(2NGd|U=Yq`z"{y0[tџVLc=LkG';ԽE7AwSG77.dj6~Et݋7Ω(~y ΌO(Bz2ş]_\w9AϜ?U#wh92 ȍif\_Ph,4Ƃn*AJȚxR&[T0ڢG򽎡kX2a<$qt{Ϛ^[M3F7(N@8֋-kv9 + "'bњw-dĊUunny ~iS0r3Kob)PehqXYi$!O\D+{ojڭ* |D;B/)C5VNS!!O\Dh=%FHW.}42pAjwܳwAB.Ȕg1u5ITri;94 ǘya(Rqc5-"Rgc7ӯG2>_`0\K<:u]Rk+xAeϠ1JI%ab*RJ&ZK*. 
_cl-|Ot.f_`5&y۱A+x+/{ ",}foiLklU{y;h6$gSr}3^ aSx4mRx] ^ ;xu.^r Okj%G]dž戀lbw-4ɻyPw$b\7o~' THytn9@>5\]XQF)ă QfAN$ɧ=BjvVrՖdUUoR5EIbσ8`LhzP+*ޏZH2JIi˒%j- SÌrI˥D",%k٦,u3ޤ= T5Gvq!8-5s /bP(txa|[XPd\h|O5)%T 3Ka,+{uWrQXs Pxؿww2[0{ÖwRs0%,);S0e:+ް.RLt,Y!vQXf {R'KH)E{R' ~0)sr+wLɑ'lJE ה/姨'=b0mD$s)"2T?^1 xV]t( x6fMO26fzt^ý`T5WX5$Dߜn ig-ujfS#^WVˎٍI|IzoAYq_q)x*3&8%u˲BçSM; co6Tyfpb%@>Wxqp;?'9qr9X<Ǿ9Go8=`m6'xT8 `jdPM`cVXnRy:uM.P;̪:٣=To ̛k]"Lp {"#b?H%\/]v_]Kmʙ!Yє&̠X-6lL51Xư.G3I[adɈT|N\+ikTJ2Za-u}ydbA81 Gk*ECIG++xQ'ai$cR[nGOJm7W7H̑nY>җaZ߳8{HrP͹_枚}Ӷ/^m2bK ^߻QWt\ F6p|е5LrMClk;vV?l}xVRW՞'+ڻ35J:nsMH I aˎ# 8 "^8.aeBrֻ nTUAN )brPStMVQ1$?Q??vW{e'a4=UvKJa49U %%0Uڛ*^'o-5Qj#p&E]R5Oc2bC%&HU8P+"ж1QV1kf J$rgzưBBP7V&"SkS$g3oV-1O_1$;i'{3'Ia%0bE\(-j qY3YOeu`rWg; QribR*7 3ª$)<PPwc4f'Ġ"4c)w+A{=4ԟ*@KDseVU]VitERu#S%Pa>)&M'M au!QЯyj(L] p(dZfV$D%/%)Ճ|,3OXtf"C:8d&~}Fy%4D'Xv8HN:XGguKOe$&.$Ug㔧C Qt#ٿBC/.uba;/ O1/mE!%DQr^"ġ9z:zL(`7;0TʨtЖk-sDSmP0 %%)"<گO݌Rcx>#&"`|Vȫ@<ӟa7^bբ KK}](<y%+QsGt$-v#'+ָ%S-eXqK6Xj#oQbնE1tAGwfphbhX#(Pyا yת5x1.}]$_9Tr!=u꘍\s\㍨/K o6c3K|~X-¯yc&|g$9)ܕA/]sw޴ޭ-wk].{NhH݈᰽aqsv0Z jd*)+Fe@3`vҩ<8T%i[Pm Xun+so˫<[vA6i?as@20TAXQ^# #:'ϯw-5-.GDK|YG2!w!C2+90&0m؆q2M6f=h0@*jZ[r4ʦY wZ֙<^ic2owx7qσy`9[ߞf;6#՘8% %w;=ygTRJHtxq [Ckb!B@} ,׏ 5\]e×  7M<*oa'6ٹa775☨lێc_U1{4<*Yj6SۣR1 MN4O-MǴ@R]l}9!,~\=:r^5:bLJ믉0) r"p-crNǠse[-S?uY+hdpƫ[4_iT4jTˆw<6[5 lK7U yl#ϛqSh#n>LbnFO嫱UK-Z34w*C*aZj4; 7옋TPU"hin oWl"i.fد`y?r#͚X㶶]8,50/ Q `k(^֓zK-\9H.hpĝc&p@B(<|s;=yoT1TF,P'k6lP*D[*imHC !*f *#"5FXw#Zu"*_$ G{ha5KtJ*J4^t؝ͥRk*6\q6!BZq!Ӈ'@w*]~t\ռҰʮ l Uje-[DC%^CwtD-bYMۮ8k,hY c;QBe2DRc[5; t\W * a=շ1&"{^-G=GlaEDYP袻[ꌰNTEˊݥ]F.<\`lSȱO.N^SO-v:UɎ +MYW;;u}oGɮ񘇣wfvcfmڒ[\ -?yx /x:GOA^)a=P \XaRaCҺRjB.Ĵ'+>5P3C[7h@h+?O5z0cћ G- luLl QtʳEs! >y|Qi Bh*[Vi<))]d>3Pokv94mnbuӳdvJE| S"#{֕댰f]y#)+LR{9_9X]2nVe! 
N -DԹȾB{Nƹ{)!&h .o;]d<cIة3BS)ڞ]D,Ln dzfOuFhy[SSD8B/.@L'*%s\ʶW(+XƟ s9Vm j|ZJJJ`dW(m$*3BkѺig6  Aˌ):#CDRշ0|QH"3`1`VkK BbQGSazs ?]&Pj'7A/8fFZW%=J \ב7Z_k8h)(R>R<7drnV\nb mAl5e x8WFxT!r@]Dw4DaICfK rW>6EPM#JIn!hBSj&;{#1@0N`P@QA,(,DG.*Xj*&%9SPjрIh"]qaR=>,G `d$yg4Z*Ms L6QFy)0,D[T͢I. mɍ,ژ%\3XQPip44QA.ps%dP D%z3N3,)#lsd&yQ(WYE,NC\%x4-$%.~E:_;Zti q0$:E@eoCyJI@D9. PLE7phqRj\TnvPsQ&PVњxcyqtu!^&2DyW0=MRB yJ":Q#jh֨]H JHtІe{Z(s-uZf\1/-ڣW):DcTN4p8 HR^p8 m8sFd,H#;34QO(7[4衁\+Bm_38 qVFϲ8 iI49DD*0dD(F-P/FFJjj478&C*tQɻ@A4 FXm)!N\B!@PB޸b &MRZ4n8l1 ,Ҝ-5_: aIpe6\'Me[oqkv7E$=wGGdp3?~ƤGYN(-NG XF O.v^w7ݸ;r?I{طy^-]&z4nƷ%Yld.wefqRN TbP@5'~2Yݥ\7V,U?+UOuivgg7{woΦrF-uܗgˏ|f?t{~xEbQ_;=_)~ዽxw>=?r=?6_7??y_}2Ɨq1 g݋o/S\cv~W ~IrߋEZvnQ6g r[p@H4命W>ݛL,2Rcy reD᜼`Az&;Հ `5vx+- ;ow4y?Kl~?6`$(x3-bp [DCQM.6nr,}e% wGbyei :F`u:*,o(\>.Aߟu~#q3,p~ss;?@g57_5oǝ F/YC^?:|2){.0<~FjvkŁƣmɔ BC?2ąB< Ɵ.r4v ]t|:)K&Y"De}bѮfp:OǾ콚#o~2q_P4۫RϢw d(~jPUQbKCP֑Z^W"܇[YVRξfROĦ$Yt;Y70OqfѲĽ{v)(<ơ0NJ<ƣ aԎ*m#nlq/5j^ 踖gߏ pabW\+$4|OJ3WdxJ)5^z7k%cx=( /7^³!8>_hd|vwP_nTGmDۭCT΃%4o˾WeJW/{Zc#G=o??K=?Vn9oϏ:ҳ+“y=koGЧ롻!uv7FlD&r=!%h5N&9lV׫U)/wO,CS YW/Ko_,Fw?v +y:w^>qq{R.Wo'/|27_^g+rn⛑o*+z }s.(tP@U4*"o,zU @UQ%%2AX\/'cDcCup`5mQ+\5:pe"΄:kzu֥t:_|S^vS)7 :)WX7%j\T6]vY^#:/.N(򂸎1?pj5OWO'H:Z%5\;,_.JU,7~?=; j\_+ZTN)CHq8ΞpPk8瓴"(,&鳓{W^Krb2'AMZ)c5"oJڈv311R 9J'23ب@κqM㒷:|@_fSduGx.[]ݎ] ;M=dbk?f_?f_$ӵ-L~֘Zh Ptifҙ#2sf]@Qng{>g,{BD\q;  2I:76B'Xyd6?Сf], @2[o`mEupYjnf_mEUt@ÔR]4J=?e"4JXy0kVءiE"*죭"h/nfXhj xBzF%XFa9b _9OG@E#ud / IG}4pBvў{ "w9f5x/;)_ȨMRyUu8(sh7;J/o1YAs<9!& sp:IJ$- aáe6;Gնtf :e*yRKH9K"@n9$輡e,-MCE:rE,-sNe// ْ({@} &|,>-=@tҵŲ&_:gwV92\®s2u&VS<#+s9#Ñ{!4C8p䞴IVW^o55֜xr9]S':bK°0ۺ$U t4eT~eb;){rzfvBicKI'3,;2:BG.M &,{{" qNޟl~5>jη:CG$UqlHfF&&R{R=h\-5!}p4s6dE LЂ Z3PyW/zקTȓd.PZh<bzv Phs'V\#!h,WʺbvIR%+s-4DQ/ת̉$KItn2+k0Q[ BɬRp2qHV i 0YCi,UJLd#m"ByHn? 
ulr#|^}qYY&yq0SVE0$zB$ik7ATu{z:r`{?`s *} gu%AT^ysV4Y>JUo+T]>Ë́ݷ&[Հhɗ6M ƵTӚ%ڲmvx@f{})]y]T XȦ(w@A2C=`u(!ⴚ1#:ȒC\d˄;;cLb6a1^ IIdRꇸTiʤtcϼ1,K{Z56evʻe׬@gv>,1{5eQS&~6V(haG=0f.6Zt]漊RK0ȹC4Jg \Ys~8r9)rzo$Rȍ4׬cu幱\3m:r)ʌlu1Wfdg@&1(ŭ{`"s+I_YPg<`B,23^I-Ɯ&JRlrr:j%oMd.cȌ~ >9 e CR N,ă;Q=֌CglkkнPZ-"42{459m !b:j!Pk͡s"fl4NG(:g) 5Q֥0B2*Z)ڠ{ dBfdV'8F4f%d&24gW0Ģ4 01ܖH.!\Cr:i|21DVm+!XgXJrR<.#mߠI"+3rZ=SV<מ*8d{q UIyn̸F0D!KK˰ܬo~۲^m;O HikNKg$\#4bnNhS½ >D1z}j`(nyC uz8~oRKFf8AÉuα@Cӧ9EPM``& AY#DR <@Jf 畛dZJv ̎|m$Suu~)Nu Lu  N%5륢鮫c6.VMWˢ=cr=UH@b-iEMH*5o2jg=퇹@A }6ysOmb*Fm,3!Yd\<bN6oy5NĆq$˵mbN\FRvl^T)"fMHn$3: %Ltc%[*X9]dH5Sw2g.#tlΩtEKSPApLΐ5=R5d3.ҷNt0ɀ 1I`dҢ!&@pF\ P͠@ JGXN̸l,cĒwuZ!%i^#ɴ=ji|c r<%NXi`}.9[gQwZD0o_'UJ&2ee{"KmL$zl8zGt`jZkNZC#4N6^7-]rǣﬦQ46wbGl]͢oo>k=ƹRO_;/IM'mcFp!)\b#'>WYKq5i|nRkuuKw>{[Tr%_N~LE#am$NIDizԐ8yjLm8ic#_|.̵5?P|d~"Z TdnEǤXOA1wګ.2]CZi)ho%s'{Y԰LEAyQDASGOFSqqF mTܠeZ-. )X*h* v?w74J&X cV!_Ќc #q>jd@z1 1`,$rI9ҨҦIc4$΁C6\FJl +(v} p42}8 %)(3@fJAʨ%c#H= l*d$L+"C,J,r¢FXtEZdYrIG%S+FJ)3QjGt+qb\2^h{6iιHVPB9Z3p31'9G̜3CV! 
D|t3jmmśsGtf9'{*]H2ӣx+}E˿p36K!kc'XYԿ-We| R UðٴR~-ϓbˋM|w)Eo=z/hSAaT9VVYOatd1oآ4l/SZlJ;+%-3(!?T/z{֒[9TxP`jGsW7&ooo-*vUd{#îJIWwdm;VsvkrZx+{0Q8h`%+fl\qbM=Ěa>&cAgoVXVevX?501fh3vk&#l7qpd1v(U&qv jbūbqْa5?ͳZF.8O'mo|BzK0?O;?hu)ySf:볒*b~^)U?"pe.rZ*Z8䭌&~6D ?oF ]axNUd=n'7 ޛ~MrJ r@@-P;eD4szCQΙe max*od~~m!n{-yE@>nܴ1bEX7gZh-P$ "̙3gI3e4=[׽n5κ7`W`⠑uoBRZ?q<(܂*78Ъ7;N[<$u\ sVȖ1W7.xLT)UVDPBsJ^?q.yĴ 몥A(CM dKeđHE:wGL}b+:e$ZKa:[l5I,R8amB uƈt]ZJ\yqQ:4zp^W'蔞#ژ|)|z5Ki.:6SuD%+t8-Tr]+V* ZWЉpd^+)t:iPD;J%EZoʍa 9Fy2ns R&kzE3;>3ҒepZC],&>:W0F5BFդH?6^'k<ڸIl7.%p8{/pո9qpvnmkcͱNye "3qv =3~jcXNj1@hdb"Mޕ 4V fG?L{iq ޡʇǤW3h$C{bhNΰq{h; "w apw W~o]۷lj[{4?=uvׯ.7/N_9۳?/qE,LړE܀tug4#@ʅ]sm\{T(_ aڷf8o{A4|q/PV}un,%Kcpo9ӿ{r[*+*Idn?ggcwoo,q.+\8?1vkHœ"K f ?ۖ0]+CVĩi^: FU)(K^' UHzB畨ec'dC3QbCIj8x/}0\OnKktL} 2+ DpŹR ~~pnꊇn ῄ뫐_/KJlZ^iJ*a0yF~$vMz2vyI}<Csi6]CIF%S:Xx[]׽vý5< gzv܍=X>"n ○WQ8>;7FEջ1S$zw禚T)Bxf4:7W 紉z^ͪNFIA5t%9rM?MAX^_td>^ '/P>q9߈NޠIi+E HasLQp5pp AgBۚ39w:wuIr"S] Y&[\)eQ|m$Tɍ3|'}\D.cdz=>s7/tg&_xӀ:7giD'UStlIG_'DfUޙx}ܙ8}z$uzص3q F)qi,p]Kd쉍mX[.8&`+9w6ǀ0wi w &.87,LX+R[Ym7j$p$haDX*bP1|J恦ƜStHl 4Os`¾8Yu>s"ǟnDPԬ8s(fr~Ln+x\;ʚ}`܉BI}χVM3~P>r$ek!ʘ-#]l[+ǞRc9&[fa4Ճ 2]cL)bΔt.;11/23US`ty]E~崽a8>{u$=.cXxuk68ݨϑ!`!qYkFa KY2>vi2a!i(,$kn{M/9OA"]3p gfh<ᅱكVpy&RcZq6FP>4 $ 79έ#3--ܜa3zzIF֝9vna{aO[ V?5v&UQ"U3J}-2^&iQ \}\lA0^%*yvrtq ;9L?@_{4ĶͰ߅R}'c/fn2P$QȨ|`u Pp(l_Ûcpޣ;'. yv\ŏ<8M4,Ocӿ ooTVZV2TX,J>w&dzV*!P٬gI䣔ڒ4Hw9Ir]8(l5\0Eal/ td'й˅+{:w\Yg7Уf齸E]181^ Aul+Sfm|߸4x&a QvL,e|/. 
)xl>F`[^=HX-&^o~S%(LFQSN:B#Fv&ͦ@B~_Tb=r%#\_.Q9'qXɒ$ӫgػ{9'< j83D:EGDGFfH`+`B kǬ'ۘq>A|`3gd @(¤$=0cLǯʧl+<כtokKkc!m0l+Fn$Ұp$<kE3E gɘG>("ԋy&1AeШ',G.a-@d)d,)k‰,֠4kXWET950OgBnL\k%$WYbMQP awmk IPǥ)5 FY-^WF1u|cp)iTNnպ|@o_\T/e1huH+ "ѫ-G:4=9klttup}5X~Mq``+V,Kc+^w}7ͭثb`Jrﻉe ?PXg9pZ޿GaR^ciYW3aW $UsH^lk -fZByGa@#&cy#PSX"/PO;tF0JcZc$9W1 >%&EX CE 2H˜=|E6ƺ]_Lcֳf}Z$JtAζEQJsCb(`Jy‚ wOE)oPE|%4*ך2{R.ım8̈3W1k3W?+vᐇ']~Ksk} s=E cQ d:yMFVcƞJ3U&ۗ7z3_j%&Φh0 RIaQxuk} w_n_~z%^o -KPKQdچ4yr[N`X)=Hq@E&p~Pw_+2sM^)|j4*X~݋>̾5 YÖQC&oq/WIog^&c;9R6!%1&pGO9i9_xXZ HQe;jNn9, D} 0b#`s>AE < " Q(d2CXJ`;|3 W䔹fڙi Rpi8@SZiZc\_bC}BKIU[V|Bo:u96BcU_~V=ҕN*rd.d]#76w =L(w)e5+fBxmi;89nQ1٧4MCrId}AbsOpSSYϩqMiŨ NK\8ikz0Q %H(.Šn #AiԪ:C`9!Ki7+I(D?5Utwi9"X(@ZacWE%6ݓ #8[b:[hpʜﳸ}2+lW_FЀ7:L{!Rֹ-gi3bߴf 썲H!7A4OF-.Av).e\d:2)gcW6dvrtK4ɑ2mC^8)K^W*DΝCV.SE.)IGH;Jmp gR<%ޛǫM١t+)"mK)CIun΢Lm}㉛'.=x;1O nrkKԑ6*)I˭4-7rATzHKFRTT f6UVaikLJ w44oJmS d?ZzMEqn0I8i9ki`%ӟ$^o0iof`Nym7g<]SdÐ5uqstp|7 a1'f$p@",0:[VK'1k-6xZ[xkkӺ,:Dae0[;'Λ&j$݈vu~}vPQ@r-dlv&Mӑd|Ұ bx-Ti>^?4pz,]=͝-kC+z:[5@v8OF_Ɲ?t;o'\EqE6\'؀o*Wyɵ}{μ'7g˻w;َ³~6ŁmdB;kpQ"ÏS߶^3 /6٢#k|(wӌaNH1l`: ]]5{}re[L;FB<£8ǜ!h7҆;3Wlq{A'-qfd|ATЕl%vnж{qAz~h6W:7 ͌Ao؈3{|K#3U25rnkovw$70'>~闳7aD,/-N?$?gC82*ˠ1L:?X#z ݍoA)|tx?n4LH%pu>vqad8<]F{?N6A]_^}oŷUG+gAĮ]J3ƌ}ӴM<_VM$KvV=@irY!1Č:z! sLWs,5Jf]QaG+~9ɴkdz'QU2ʚ:IDM$?D{ \9˹-Ć7>ky %oǓ?f~>q+.粫CGBh_cnw5XObskAs%o؍iB?t_ddd̵8}<?=Dvd\E8hx0ؚ)o?Fsg?r~_ܙ">I`^u7;yw ҽI=5.//EwGcTW._gO͏7QϻhE2z]G>|ߢyy ^R*u z96rͮAYG#\6^ '/P>Dk T y(ux^ TI\KLMY;LȁMlm#+>܋ y? 
CnEM$]A0BdIn}gHJd).7ԑG;yA]YKAgbY:UiRU w]6U%,ea2ҩJE i@* \tFY64S kЌ0NW8/;ߖ8rѩJ eɺ}d{Y 6qީJ@H.70THT }qݼؑfCPuݣzcucS_>tE[ևi{)LlL.<؋wA|x.gٕ/bj[ios%ꇡ&s^68و≹d^rɀzZ-*kv3O~gdla!OO|Ck0lpthh7@8H ]T.qr q4*4Ë[F.J1R-[F@C-6goh7(OPj7sC@t,9k*nUU 4Ƿ Xk:dv~YVYDf{8Ia÷z}q#t'mNѧ8hYN hñ7|]]x U ;7ޘp n̜- AC3gӨ1s9DB6hMF~/}uhCgL] x<]YXG-)Un|˰2rA^{VL2WҦ1`Ĉ`Os_HC+%䖛Će#ݰu6y~D3mTDȭïK3Yʱ+DzXp,wzJІHGNEB#8*%֑ [^w!kc5I20,J8&4,fXEZAA8@ gㄳm@`!aj9+L2 e8y5O1[M7s*އ8̮qЩޕқ%SNJl:/!T\`,03}QIT꾨 $#%q'[W7@8*QaaU,PX e>@@,Xk " m40@ mXLQTmg &^5 -1(k C T d@bXrA (X聏B7 kuHVpхnq68umVU0 tEZT1]5EW~^[좯tߪ}Yߓ_ns-[zz[ܭV$'2#ePٶ[[*x Xgnmo$пXZ~Z(EբT!b,VM)iQTlWÍ;-$.EX BM$܇Ct2&H04@hI=^ vL=*DOfwEd0%;Y}B jj[Hߘr/&,&h پpzv3#@0z/LnۑZ3MVaKDkJg]J*4Kis$oq2[s[Nyf|l8 ?Ǥyg%QA␹HE~RiQ.GvB{NDD]¡6 3X';:~avMF+YcIM*`""W!B:తC] Lr|;v#-#`-.莄Hh9;QlZcͣIkNɆ ?ǂC ~,k4䷐.OwguTz4xp BYpv+Q$'x:)N`""QS436Cd;$Y -ƈ -dNy QJ&cJk@Bq`",aLѓ,BX'V˜rfp_+k \CHBgW*-pֽmJgj?ݱ/>-358a1}GmLj{Pʪ^/~@&/(% `e}bf_E7S7 Կ_^29Ho͝H{7B |]o Άl?&$Ϸ{'lAGܻ#!N/>uuCP!mxAO| YrhuX25(.8j*°i\U9ʛBG4lowzdrg{eY֠(vEr'Ok#ߏdo Jc @62JۈhgvISg/OXa^ %-)G3 9(nq2~Vw<}$-FϤ4Pc amcd0ɶB!:X#lnOzgkH0X\gj3P(Y x:`#(#nᒔa~08)fRI.D*ʆŘK0@rZ$Bhjߋ"˿81u"UՌ}ʲ+W5sC+(W_0+g&Ɛ8 AV"뫳$X) ౯3mm1Uf#CqVzl緣E2sۀf =8JrO%ha#'R㌈:eH CHYoLD#E*o߅Ork΁dqLaͶxOco|㡧dgZ-7tOӟe἗\]nxz|ڻͨv^?Cn37>U/=ȍQ?cEjү"(tB}N6߭ 1$N pwA`sx>DZ'1|_ix`Z?KMy?{1q35翝Y=]tv>K?OQԋVwf:mf`z&'u"r8;f2[~XQ?g5+OJ_z^~翾mzs"p} v8ibI7֭,Z)!w%eɦڕloEg3Ù`Uvgޙ?>{u7Y|Zm@Aw޽jRF]9۠fnM*7\#]Icڹto^\LCz}x8 >$vٗPIaM0lShq;_W *^tDN!5 8 J;U:96_lMN>2Ǔ~o`f3XNR'WOOoZN_v6te1@ x~1a-{nv}Tތ{7|:,kbnSf XzWfW(7ŧ/c̿8Yw~w ,^/oG4t2Upn8t2C}gB&_Q(ghg]o?Ѭd~ko?E`Nlz8.h  2ķSӅ bW.#cξYfBf_BU_ )F۹乪&2(uwv9¾Zbsf?! 
kLEd%@ޮj 5a~%N}U2ʊv Jq  oo(*+rU ȕs[< Ead%K:&r//~ ++]q.?Q/#ucx𿄯W1+ ?ľZ^i͠7J8LG>#?H{Iz&Yhq .vyluCs.:[=D҃||u[|jkttد 9;n&Wm^w9dƺ,~|y޿:FEûW9?eǩ7}v)B/lmt2{_d%SW68],PxEfd Tzܤs m?ͼA3ئyG#>6^ //P1 L]{4Cȃ3lL>yo"waّ6LĦ*vXyæ3HZSҫ8/PtIzLz&鵢׍i6esjߜ@%Dd-q8ffv$Ga̵LЀBØ2($FCƌD"Ff10|'Ӥb*W5[~V[ٕeh"%VI/ʤ iqvD`kٌ8y5++y I83N5Jg|gcd5#[ܗ2˷>|oG{`E@ip؟u7Fn4nA[50` F-z'*ʍ$KLԩ-)"f$@M dE HFyLZ ڭ.6E s lIo{j2BmJ$d $%~&TFnʹ9\*P5 t.$pL= =cjz [xyV 7h*4;Tylv@NXc%Gvn3[؟@eVX+`1?zcLB^@'OhKL=An']6U ti uBp?&rPhP}jh#pƚ-)7.IUS+5jȐevvHC} 5%@zaOz3ɚ J߅:0c:=C_XBwPFJ4:QW.ra@G[KO;$UT|\;r@-nկ@GkQ^@O`}䔯.Hpӣ`s'Ń@ɨowG7b&A|?g*,y43Bo{qUw\1Kژ&m|G,:mI$L U-m.i~«f Ҵq`$Pw^:ߴOf\(}e) RkR$)=]LH ,&}0kDUHwR2֛a+ q82܊- zc2=]sk80+l}$Z}>|V[W"5dl[&_ц83sSw/x؇qe®V^S pq`D}asju}{zӍ{cLC9)Y*MO v?ڑ}F=RG1q >ЂKr,_e_B=D|ab?)KS j%ܭ͒ ѠYiS56߂XuKwqRDPwب|R{箉95 S؜\bvLFO -d!lC! ,@* *1J`q0bDIBJJLBkT]^CۗEyˢJc28KS/h( 83eaa SAHQ e# p9CCTw ǪCB> 4:IRC4| $u" &4BNI]4%U%*ؗտ#~7TBzfO@u5nD}&u_ۃ&q)oH6hxPmq1fjAA4wU@(Y(u/@=Z(B:hEC"$ VCg.6Ƈ|:k#J`'>줓P<ϔ?3 ӽne`+i1xMd$ʽY_`ꢖcc;B'c;(a߬ 3yi9re#qo5WF"MU"Si`( BJ`4`Lܘɕ) ҃Xvϥ*\Ss̡_rmaC<îP x'l䣲HDLL'|$2\%؆UJHD`P!PŚ#wF6ZnҨ,CLJ;і'qJƅ~A+(9h 0`0"!L8 L~ CˏF(%eШ "J3P*EHd DE,#$Ҁ8Nc-C41e:&t4٧~[#EUoCw$vv޺Oڶ rxxv ( V=[:ξ>|dظADiSև@՛<;76^T9n]"F;DpAQse?#bUӟ"*9haw[,͘Z`gK"v o};wUmY>ukg+R|> DYA|xg 9ITCQ=9UՑ;h9MOXlNr7(m .buR_]di[i.r3Y}<ȝُDv0T 2xq>R2g};L2%a '\L:O21ئtr?7; ~m-t0ceEu}!3p05S 'b<*ᰭBDāVJcT84lP"$&Q A9K

w% zZTt0Jѝ<)m'ۦ f@0x#!l_=Ck.)g>ӳg "^*bEiSL}x:"r.,aah) #h}#ٴ`o'q=e=w'=6W 䚣QVwg?ebߎ+JWlj!qˡ;P•`9CƛY2Dd2^r<7 HxeG%Ip5`ԥ}I JW4janV1COM~?;^BXu׸ ovBWI`Ro/Sy'z5MQ*&Z`R5EZkIhQ;CGl;Cc}HtG3}S7u%LNw> `Ͱ,w W:8Q^r$ixC"TGQ;|c eE:Qrf#2(RDalV.*T !/R]#]%Xjr|49Mn*VZ{V&>* \dPKDTwD3Oz-OI/{Wܶ J$"5*=ʊ,R9.&)XQHEpMeUq& Q^l] YRUVU4"KBr9 ΓyVTnVPO"+#qrA3u"lhG' ,'ÃwJ!Y(d-MO#dc;"4D)ZUvƋ\(d/D\4& MeJ=Ҙ#ǚΉU8/3+mH)5*>1(s;"RV^A[졳(w-HݽSԂZo_e<ѯ5rBr؞AG^v>|l'gT': Hbnt:n0q;~C(> #.7gN1{8⸊ ĕ(]\;8hK   6dK*]DXmѯYkU60JMmœeYI2.G߄uZ;A5Cmxυ(| su.p2X;& aS-H&BHZHUZeƁֆ/Phʗ.1e RLa+#Y.@ʒj]Ts#_$od#D1Ց,H1'#)BMq#YasF@bk쇘D2I;\DT[Pш :X,"4W#yVmvi) bRs|RpTiD5k1: ϰ@!h@@H1 30!)1ݑ:D lfNz>,;a~p1Wm'O{,^2XռS3Ķp>޽I۽DDFz 8y{{{#7L\hwOC8Go{?&Xc16i7VLHsj{`2ZyÄofe:=s)F{ 9 ğ+FnmQK^5&͢JcQuȌ&fM:yg%C!L}&^ 4 mݹBXoX)E ru71e`+C@M1i%ǰaM n;y,;  MP眭{Og%!e}&k: "5=l< aUDf܏pi+;" 8%*ҏ]坏\(A  _K"?4G8!s =# nP 1yQRs0S2U[suSX>EghW^^c60`^ C#enN"2v)9TnQ9.zWΦywZ` : 8;2ajͪ2J9bj!߸)*~ްnCnyy:yߑź.ۺ,vnEh7t87 [N8Nwdn{\ETu] j Аo\Et1BR\0D[9z'0D533muv0F@ `# U(tNIme&w, "xD<_6P.!ݻE$B(y 4P^I1J qBX*P8D)&GN? 7 ;7Lś͛(3^\ f׽QOQ?1tq#4N.D_/yG c.z(&ia fҽMá), :&A6'BpV*ֺ FBdlRMoiRl;TL]Yvy:Bўl,HR?['gN9 w|H+Eg_mJ%ǵKrZ4AG2Rr"#2RlY=ňy䞒QnkOd)R!_]tR-~Y!7 S>f*Fc42-acQMǚ~/m OӀs3Z}gh]ZqoaL~̈́ X&mtF $f<u0 nscB[^bW!<%. lL^~/5Ў*'U_-q}N(9Q9] Ucs͙I1UUGIњ|Fkh}@p0bED˶bz)!~E1DsT_lP)^4sDcmqmse1G0#]ݘ& :3XZYAgʖj 6Qț kQ\j=V4rs6ϙ $?OX^{/٠ ,=~7;MNwmDլyT>G! 
_|5i}e-;djpj!J4FZ841Ոo/ z'l,{qh٤Mׁk@W[Y˖w_ l>FF=jӻZ} t~}=߳>-~=ʣ(DxߖX[h2R#Ǹ'j5H SFy;hE\j`bz[eiؚV]?>aϓvop҆g#Nfn{5HoD޵?N3!0ʽ[}襹b/Wgwwuu{s}Y#֠?m}~}gy{Wܝ]~FD(vwߛ.]&|ryݛί =q F0>>YS^WCtlm B{fֶO_G{9aّ<|i{L`L.9FlOō`6׈]\6{ R% Xdv홱4;GdzE(?'VpeA{BO3$:MNӟOMo,:ǗpќsmW̏Y^Y1xh|7 e%>>L#/V~/xhmFtP՛yV뻵 o'劭ҟB_A[]^zիwУq#xab| fFIl|o oi{+9̥L A</r1 ~m[{=FA8{:>:g~8N_||3$}5_`+NEv`L!q7h^nJʤϼn'RLIRcϮ 0b_8οlBZz"# c\ӥG;d9@•z]MʎgOOkIR _([^~/2*(E%j8 xj&_p^ڻ30~:)+ t "r./S6܀>-!4/r y7lw5|YW:6^l-/tdy+1 B;ѠyD,k,hP2(S(*T\vhP&O[jϿ-UEdQ20wˤd`" s1a(UFZ{|/hB eϫPPIApgQy&tն1vbxK!+ڛ˫y맜7)ךs rjӞnRQJTbB9G0xhtlK{ )\oO4QAQ\Bg,{)'Lr>h!oH2^lUSVOk34Ǡؑj45#yJP6OEi/6ci㠍 S"\"w1Q@d,aJ0%ו8Nя_M!\m5-ϙp 3UkT˦:hG>6c%1Nw2I=T**9[ġBs iR9q=W7Tܢ4jkR !n y=T**D:>JC !Y zT7Tr CJz9T8vrg|nf|D/qrsqtHc8kUb-dC$/%aK*ġX=W8T(v 5ֲN6Q6f{؜Q_2#n`\fɒ-HU/*)< )rLnȔ{Hs.k7*ɘaZk7`7*,[59#+ŚsJ:yz_H#ģ C!%ȗ@([HdmfzN\& U0 zג^cxt/-D&}?]VzT2W 1L!C5ȧ툩 r |K/_KY0õ+uVCV1ک@hMQ6L ¢z{)N`ٶW!'Ns@z%zկgĩvH;>0i504NG!G@>SP!$>wLI"7QXuQU6f-o9*3{洍n OfvL;tڌg%7''] v5SjuٳM#9w6T oPP;+P)-Ҩnt5T:T**̜-Np?<> AC=g#Gsbm6srt ޯ w:]E~-1(,j9DFM%-lj=w,& Pćy< W:AP{3?`b~9\](].?{FB9 u>+۫ڌ ] E,Sud(\?{bw_Wu{CX?u(+6ɚ; byxv cS6GبyǷ,%axwܞlFywv|xqē ]Gd Ti[VtK&B-qWȝ$8%ߕqoAVs;z7%,U`M禴P@1zB5Ck  Ւ?Z\.Az#EdjaP01`DTDBN b$spQ,bi(}񫒀 ..D6KkJEq8(0Ti ӑe(6#C<J&fFX} EK8.a9P(vTYj TaI( TVFPlޡ,HE\ik҈߯V"Sh"]Z!]E,6(@G8 kD2D cBŌZT}I]?EDIɔӧ<3I>&P"Dd)qc(Sf់kD(xTH(vwpYtx.T >Jm SXmuc҂pG9ԯG$j 'h256^ vIZgKoIn!qvEa9oxmh" @t--wk,IFÛ7$_W l$"8'J^lE".*w*]8]wgW̥9xkdFs$-y,~is$2 {76$@1;6L4%e NT)]5Q{-;F;5Zzki/;;e%AHq4 t+UЏ1tc{݁ڀHʼ%ĸW|޹(/!Reb9m=%`v;"WK0kc]^-ܚJI G(rGx'󾝼I з=H(II[nT 4@)5kz/>TqBإln};\ۓOxrpaez2Q:PlmBbU܁Z" n_h:W-j+` M t_\ͽYM7 nKBL3cUOip S6H!w\Q&XCLS]FnkIo!3c4;:J9lvcew! ELi; \'neiPDtʾctiq&$22{]&sEOҠ}*)WX)3V>vۄ" SL))՝ÞF 'NG]kpߝ(kXs;&+E ̨}׼6f4WӤg{bTBsoh>IbB;/X43apuv١ Cg4׎4lBe];_QIםhz>[mnO;݃ZjCbӝ>،K;Uϲ{< N"%WoN`-%KUۻǞ ,N㤬ovVqڪ9jW׉ZmS[kV|X|g_E`~t`5j}/lu֮;Y^J X(@0I堫b:L)&Tb=\HZv:.|]o`~io}UqWitz$IzXk;H1էQtpw. 
D3l#KwI?6v^_=<рplӯ/@zcᱣ7/^tO^+{u~om: [i yKn_w:ٽװu})\?Εwr_uIC|' 6c3=cdU&:uoRF.ꭵ M쟏zzϰ?0|l'G8ւ$0&+llIhq=' 2^4DύBrc phi|4)JFM4pA{uY/ 2 bIڼKGy@M6 &(i·f6t垘"Pޯi|IkÛëNϿ);Z,>^l!}v›֫M@u8:#_i Ohu{f~ X>|ooNFg5X`bt8y鍿LIwj~aa^xG_s;ʋ񙺊]4Uy5)**dĕGÜl6a$SZT+V:I&b귆gP`43:dGDZPXK r-1R U&_^!|#a<3u-öhpUۿqLL\C)"%wYYA\H+(>r >eOc m UJ@1(DV"05!cpJxRBY6XdfS2 dHlo2p:EFL)WX󅪼^`B+X9<}iwzN7\ϗ`j gjik19Zqj)ֻGHok{Y8ks^Rt7's=C윗c Ы`Mީ)Jᕉg6)@p J_J]cTέ svgP^]";K3Bf ^Wx2Xު!Nk\tu@gqQG`9h4h5V4dU։:[| o_`q zЏnH M]i:؛`I${53"q:U] S[icK{΀}kqwP.X ZLq+>Q. BjnܠZvkmJr=[*q1v`w+3jCN)͡B \jQeծlծJd˭Z@S5qSIڭ憔 oC*rks-8US?USY9n0R7`.*[* VHVųĕ53j5^0'rF=r?gHu$qoKJD~"Nز2#ga8,pd{OtD46X h?KM,8ICp( 8Y8PanDf#lv)}B]r8YhR)14@IgLK |VsQ^B@4#@ T' $YJmr44IimLl~9M){\D  w GR*<ո]=.[A{nbDˤGG媣ѻ(*iVUSߓa9 ;)o\dE$)i );YF.m|\^*w,OdHIqj% BjH‡qL83V01G=LٓyJ-n洠c\iJnH$A pki$1Ik22a0t \$aJߑ(+Iznϥs\›KeqNyݟEt #Oק@SFdO=Q+&) 县] "H=?5wG0J;d1,n˞q1gO׮K -Ah9Rbօ1" ~?WP)ٝT!!Qߎ{=!?\L0DOC"-%2NwwYqCHSun_u5DI]~GQ2 @!,r;,Ke\dBCl;٭Z҈~4م{YbX z,7Ŕ&.?E: e't*RAvQG͸TAjNU"Ejj4=tHdaF>AgAK(0,15hЯڠ7›S1r#jybGd3 &Q9jl4Wm46y"6EU/ s"{XiEEŨA  UFx3دHaUuU'${ԠKhOP'0~Լ&Ayq9@Ire%7 *ġg-`ݎ@%o%Tjx;AJ=|BRNηj"Q'ǒ8ˇ﷎kg2#' D0~Iة843&w?%oQ^%G3+ͳhr0pė4$|#a@$$5ΈOW[_g27EzPMp`51ܫ2x{en>BF^ I#r]X좒Mê<̊,7ae-DZVF+;Z`dkQqEw}FNde'4.3b+/)"("Zf&9~*}h@j&h\h&b3eٻw,]:f.oSpv}Ϛ^?7Zp9&^Y%X.4+_>jE+_4)-*_ؠtR ,E\%F}N Ҩqv o(QvvO#q6AL3SN8v ocdq>[x ˉ\Y 0q"լ~ :Vb+W$Oyv]w˳#;[.m.Dr &o߃n JoLez) ;bVݐ T޻<=wʻfgˠRք婑r@$4;_aUZc׵TJ鞫ƽ&c%E b ZPJ^:*m)U)1&SȤ\/bAO@B+ỲP#dPYXIr-Aױ\ )+FΉ #H"sA䱩>jbZ:6Ql}A%E1~BONo-T&#v+ClT_)HddJ>1Ul$_7>#I넢j}aϗQwK%%,/6^DnK̖+GtݹM,1dػum3%F"XD0 [߆nW+=΃?:D4-bLiHjLr2& \`b Xt3^,xt:@;ZEQX#;)kT"xUp 7Շea~V.͹XK8'ǫEd_f-r1)ۺhևROph]A!OHw#y..b۴Q75xG|Uj03\=^ÑwSAkjM!ͧ_Hy8KJ34tؘfMT6 ʧaSDȅGr>(0YLEC{NiE:,)'rVjtynN5>6\`x9DtǦxɳubɣ˃J4 f[ T\b}LB}n<,\7|X{҆ۇ{W=eG%Z.ƪFL+aă􍅗P lH~t2N^\bdl/bUƧh9SX7UeqFV)S7Xy_SYʼn֭Ƨh9%{;MSB.Ai&휧ăYuL֭Ƨu>LANIDi痈Sp'0eBHO4:r/@\p8s@ 0{l\z2KG H>Is[AT.PY3rQ3pHꊄFxsD9V$YyK$fcB` #J=.jV^4һHc1j,jXz8]A֖iu`@"!'/3 $.fl2[R)>:FQkMI|uh(WI[_m:Y1?Bzv"M$RpfIM7밒4`]*ĮM9 ?W @h$X ] Hd 7nÚ" az V3$؆ߚuRT 
9JߤՑBUEwD4(Sv29A;J!f(MTl]zkqPxquBFo:sRxu$ f*м'xmW$)XNkU5!CsJRuQ 3Ld@q`<:f*S;:d:dS Jir~ :gNri<9=ZNr!i ˈ4(Rgwth9=D L^UbDkLh7GTcF l5qΖHyZ~r:suR/̖qg~?|}WEoCI>d&s4ٟ&{=Ig{[ =- kG]q'dvp3zW?;g>+0mG-ovUͯ-JL_Gմu)έՌ2,=Z 8w<YPdq0u0Y Cq~"|`4H|81N9~<R@&!—j* $3 kS8 b\2)R đ{8dӇU`k Ԙ>Cr#&qpD)k&dTrrOY | /z&߲ I9L{ Jhۯ>$>t LWr8DB(Ll'J(=iQ}횃Qqk;~'Qӭ6?dʌj?vRF nu Q5@?y/wpr?%0~3MfLԴ^gm\0n[L0gCɥMw3\cr2+%f͝݌R"5 7ՠZ":7՞׭=ᙱZ UNF/ByĐ2*8Jco$O!r!{NrT\׿Ro[:)!}og^k5rLt5fi`lp nݗ*0&w݂ĹS!x+nx çgoJPnqK<-]/'dY g|_{"lEo6d<[d# ˼ gl=XTIԒ^5zi:)$_-I5䶂W DpP޼)]N$:N٧6x!C[38tՏX)th֮:v3!B+vY>/`y m>>0ZHy{m7wU9Lo|/W@'C{ӂ&A7%G# .7Qh%%r[tk%"SWӱX #i@=M%$!_ˣ5*L#4pLt*4לN7w9ZhbM1XĿERyeۨ^jZWVEݿQ8]Ja?,xvz\^x[Oab2,h ;2<0AU. Rpe) B~IanXʦ881>Lj :WXfvĦ!61piow xTNYȥͿ -59`߄^t(OaJx8%e{$aidX "TG<-*BʕC P4ˉ:}F)͔$27IƠ[^h63$r?ʰo =l7IwǮy[ktUp$PHI53v' 0\xT T0㨶'dG,Ql U#0&s<:3s蝉jdga_.3Nka_5D5Vւ6jTxFiw I@w,p?GSRS:,en`L0tԘ! bP:dR沜zFZxFeNwx5"PD/(v_1:LQp&#ҒbpoEp*n:D.¸p>IIé"I`2%l}Lj$QwK!LOmCb!t13Ht⡈hJ 4|~*'k] *%9}MD ΥaKwmTBGq|KTt03\_ʑϝm ?K;@/v̥'vIRד~HI^lH(9f3D"l62~"Owdzƫg~(~dzבՕ\d+r2T>f Ȋ1VkW ~D &<&Wb\&*_j3h BՓ,k2DZ(M$x_Q;WE w%%5yᾸ=m;kM K8Vͩf0'G}f\.P9:+ y 7ұS; :x@5dzkm ZC"]uQ/o#?]Hڥ 2_Cch{唤J#}]UYJzuS$+3DgCe_6\Wvn w*` hi ~&]:@ uԨ lY~KUN| fvJ8t+)um֌)2t+nАgt у8эqV.S>"|VvnSmUtN-k ͞1N/*03xca3kGkF Q9ImS'|+|K"SjC*dʒc G^Yܹk4R3VY'Uuu'C1jbAv; &d~idGd+L!$8 OSŃ $@[, Ŏ_i%,5FOi}nXj&`bGk ƊgLb]Qqtk))0ډKōb&0ډ+3Z]zf5䑁N+]KZqe( U|rUZWR5km=R egDKqW;^C~PyD+0?)X | ؓ4݀;*IMt/IŖֈoesHc)S ,!Ad gJ$ ˇP 0E6[^.-ͩ&X0W:RAf,<8}WB~%$` ޜ }{ TStkeyn^vOL,ڱ y*ZJ=io|xIey:eϔ12-$Jwݶ!\EK!oqΣ0rib,lEhmXSۭlGm-h3W9r,IS K uÊ\Pt%Icƅ꒤O$)q $&3ʂljE OE~T"Z3<@'!jI:%n -nZ}=`RQ@7팅΀8@y' ZQX贚g 뉃kU+rܚӴ+CQj_t;A*CA+f9٥ }ޫXͼ6Yhު KR;:뛽>_Xȯy~LSPﲸgi-es\xgE$fayhZ[^V3wX7x[x7zO]ߵd;P}$ZMoısYyr1dFQ)EabiQK]ѣZm^N1I'<}5T 45!n>>:AOm8Vs̖ N4w6h[io/7R&VMǧOiFО7z5s|2Zj٫O\iJO׆ǿWY2l_#}ۏ^wߎ{Y`xݛ7yűTjc}}qKs?{ׯ/_Ż߯Ov=W(Lۓoc7aݛ&ͮIRCiYpE7HnL';,fmo(Џ ^6mdҕKO8g'wgّ7>:В{ҤzLN 8FltSja#OemFíRrǘQ!&hWMtR_n&snZn:KSxe=OEE;a4LRͺQںٗTV}M{ڳ'ufY1ѕ$=_t3c[ͮ5x!7٣>]ْQFպqc#ڪerεGc땽W ,U}{~>9~wu(Ag/苑P%|?ח;vdIl>L7ןRd0Jv} fEr";rjk 
&xN|}~q.2쇼 s͸+- yϜQk1TEӥͬK~'L釨>YZ*N+SIӃd{VZ]pkk"&Hp"1jEHE*Qu\$B2h"#C0aHE:?" HPPXTm2]:HP:E!SZ`>Z1Cma0v]8:ި-?5&g[ߙr8qH"?p;?,RMHPc'p%;VmϬ2^^hVmBC)wTnGfs ߀m@+2jWdW^(S)raT^?)6 6S,VM& П43O1U.t駵[D(=K5WA{qhTg3;IhwTU`Q.8tj)ݨ>+=+@&Ts*+) .MiR* %O6x_Oj)N:j'~omq%q:a~!XB^(,@z'hO%؈{S{i= 5ƦHrQ3ȇm[ƨ)hSf XV&5;]U<&\cՖVK=-@:#SWY8`3'#&zFԝZKY7^svzZt,yOltp3BgBv$3#ɿ7\j?h?,~8;CsCDCDDy'B ?9 ]L+ú[UgvmRiJƘ TT^n~U޵ۋ]nZÇfČkb74/PbZSBɋ{1] /Q.BQ[ {vIJXO+g`|X.[}X~`Bh"yUNEKmbr sg" ,,;CXx*w; I~ ڶ?k;:lQP䂵)_j1k\n.tzsfO%U.#'+;lZ?(9?{q:uhM){ {xW<`$}QG(|x]qQ+k]ltjuʿjWYWo/nߎ^\ 48 +=/D5x +!4npHy#~,'knەi 0N~A}`wo7=rJ{fѷhV??D:Cswf[ێџ+)|%L^/39;X 6V1gݑk-W]:淛1 \w3u)Qi"ڝ KuqcvvZ~a.Qs՛ɐ(;m]vlF5-ۨaӤ[٤cۺʦc.e~#?YHJ =[6ES|տ\behaJ\UUBgU$bUuUرuLy5{;Sg)Y^cA R@ץQ QF$biBF63 T9 K}TZsFoWS|ɡ QW\m0>C(jw_]*Vb6esZ@JAH[c36ڻɨ21dYY(ždzvar/home/core/zuul-output/logs/kubelet.log0000644000000000000000005476036615144662257017732 0ustar rootrootFeb 16 16:59:32 crc systemd[1]: Starting Kubernetes Kubelet... Feb 16 16:59:32 crc restorecon[4690]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 16 16:59:32 crc restorecon[4690]: 
/var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 16 16:59:32 crc restorecon[4690]: 
/var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 16 16:59:32 crc restorecon[4690]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 
16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 16:59:32 crc 
restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 16:59:32 crc 
restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 16 
16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 16:59:32 crc restorecon[4690]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 16 16:59:32 crc restorecon[4690]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 16:59:32 crc restorecon[4690]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 16:59:32 crc restorecon[4690]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c20,c21 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:32 crc restorecon[4690]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:32 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c168,c522 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 
16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 
16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c14 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c22 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 16:59:33 crc 
restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem
not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:33 crc restorecon[4690]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc 
restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 
16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc 
restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 16:59:33 crc 
restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:33 crc 
restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc 
restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc 
restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc 
restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:33 
crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 16:59:33 crc restorecon[4690]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 16:59:33 crc restorecon[4690]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Feb 16 16:59:34 crc kubenswrapper[4794]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 16:59:34 crc kubenswrapper[4794]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 16 16:59:34 crc kubenswrapper[4794]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 16:59:34 crc kubenswrapper[4794]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 16:59:34 crc kubenswrapper[4794]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 16 16:59:34 crc kubenswrapper[4794]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.551880    4794 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.558668    4794 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.559476    4794 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.559535    4794 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.559551    4794 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.559679    4794 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.559769    4794 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.559836    4794 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.559851    4794 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.559865    4794 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560240    4794 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560252    4794 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560262    4794 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560273    4794 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560284    4794 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560294    4794 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560341    4794 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560353    4794 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560363    4794 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560373    4794 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560383    4794 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560397    4794 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560411    4794 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560421    4794 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560432    4794 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560441    4794 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560450    4794 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560460    4794 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560469    4794 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560483    4794 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560492    4794 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560502    4794 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560511    4794 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560521    4794 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560531    4794 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560540    4794 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560550    4794 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560559    4794 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560568    4794 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560587    4794 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560598    4794 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560608    4794 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560618    4794 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560629    4794 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560640    4794 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560650    4794 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560660    4794 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560670    4794 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560681    4794 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560689    4794 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560697    4794 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560705    4794 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560712    4794 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560720    4794 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560727    4794 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560738    4794 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560748    4794 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560759    4794 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560769    4794 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560779    4794 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560788    4794 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560798    4794 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560813    4794 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560822    4794 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560830    4794 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560838    4794 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560850    4794 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560859    4794 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560869    4794 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560882    4794 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560894    4794 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.560904    4794 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561070    4794 flags.go:64] FLAG: --address="0.0.0.0"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561092    4794 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561111    4794 flags.go:64] FLAG: --anonymous-auth="true"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561124    4794 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561137    4794 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561146    4794 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561159    4794 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561170    4794 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561180    4794 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561189    4794 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561199    4794 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561208    4794 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561217    4794 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561226    4794 flags.go:64] FLAG: --cgroup-root=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561235    4794 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561245    4794 flags.go:64] FLAG: --client-ca-file=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561254    4794 flags.go:64] FLAG: --cloud-config=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561263    4794 flags.go:64] FLAG: --cloud-provider=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561271    4794 flags.go:64] FLAG: --cluster-dns="[]"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561297    4794 flags.go:64] FLAG: --cluster-domain=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561343    4794 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561368    4794 flags.go:64] FLAG: --config-dir=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561382    4794 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561394    4794 flags.go:64] FLAG: --container-log-max-files="5"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561406    4794 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561416    4794 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561426    4794 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561436    4794 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561445    4794 flags.go:64] FLAG: --contention-profiling="false"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561454    4794 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561463    4794 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561472    4794 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561481    4794 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561493    4794 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561502    4794 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561511    4794 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561520    4794 flags.go:64] FLAG: --enable-load-reader="false"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561529    4794 flags.go:64] FLAG: --enable-server="true"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561538    4794 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561561    4794 flags.go:64] FLAG: --event-burst="100"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561571    4794 flags.go:64] FLAG: --event-qps="50"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561581    4794 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561590    4794 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561599    4794 flags.go:64] FLAG: --eviction-hard=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561612    4794 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561621    4794 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561630    4794 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561654    4794 flags.go:64] FLAG: --eviction-soft=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561664    4794 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561710    4794 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561720    4794 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561728    4794 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561738    4794 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561747    4794 flags.go:64] FLAG: --fail-swap-on="true"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561756    4794 flags.go:64] FLAG: --feature-gates=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561767    4794 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561776    4794 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561785    4794 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561795    4794 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561805    4794 flags.go:64] FLAG: --healthz-port="10248"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561814    4794 flags.go:64] FLAG: --help="false"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561823    4794 flags.go:64] FLAG: --hostname-override=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561832    4794 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561841    4794 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561850    4794 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561859    4794 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561868    4794 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561877    4794 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561886    4794 flags.go:64] FLAG: --image-service-endpoint=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561894    4794 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561903    4794 flags.go:64] FLAG: --kube-api-burst="100"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561912    4794 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561921    4794 flags.go:64] FLAG: --kube-api-qps="50"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561930    4794 flags.go:64] FLAG: --kube-reserved=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561939    4794 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561947    4794 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561957    4794 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561965    4794 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561974    4794 flags.go:64] FLAG: --lock-file=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561983    4794 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.561992    4794 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562001    4794 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562014    4794 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562037    4794 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562047    4794 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562056    4794 flags.go:64] FLAG: --logging-format="text"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562064    4794 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562074    4794 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562083    4794 flags.go:64] FLAG: --manifest-url=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562092    4794 flags.go:64] FLAG: --manifest-url-header=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562108    4794 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562117    4794 flags.go:64] FLAG: --max-open-files="1000000"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562127    4794 flags.go:64] FLAG: --max-pods="110"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562137    4794 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562147    4794 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562156    4794 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562165    4794 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562174    4794 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562183    4794 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562193    4794 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562213    4794 flags.go:64] FLAG: --node-status-max-images="50"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562222    4794 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562231    4794 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562241    4794 flags.go:64] FLAG: --pod-cidr=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562250    4794 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562262    4794 flags.go:64] FLAG: --pod-manifest-path=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562271    4794 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562281    4794 flags.go:64] FLAG: --pods-per-core="0"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562290    4794 flags.go:64] FLAG: --port="10250"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562327    4794 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562336    4794 flags.go:64] FLAG: --provider-id=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562345    4794 flags.go:64] FLAG: --qos-reserved=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562354    4794 flags.go:64] FLAG: --read-only-port="10255"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562363    4794 flags.go:64] FLAG: --register-node="true"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562372    4794 flags.go:64] FLAG: --register-schedulable="true"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562381    4794 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562395    4794 flags.go:64] FLAG: --registry-burst="10"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562404    4794 flags.go:64] FLAG: --registry-qps="5"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562413    4794 flags.go:64] FLAG: --reserved-cpus=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562435    4794 flags.go:64] FLAG: --reserved-memory=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562446    4794 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562456    4794 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562464    4794 flags.go:64] FLAG: --rotate-certificates="false"
Feb 16
16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562476 4794 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562485 4794 flags.go:64] FLAG: --runonce="false" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562494 4794 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562503 4794 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562512 4794 flags.go:64] FLAG: --seccomp-default="false" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562521 4794 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562530 4794 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562540 4794 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562549 4794 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562559 4794 flags.go:64] FLAG: --storage-driver-password="root" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562567 4794 flags.go:64] FLAG: --storage-driver-secure="false" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562576 4794 flags.go:64] FLAG: --storage-driver-table="stats" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562585 4794 flags.go:64] FLAG: --storage-driver-user="root" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562594 4794 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562604 4794 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562613 4794 flags.go:64] FLAG: --system-cgroups="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562622 4794 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562636 4794 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562645 4794 flags.go:64] FLAG: --tls-cert-file="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562653 4794 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562671 4794 flags.go:64] FLAG: --tls-min-version="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562680 4794 flags.go:64] FLAG: --tls-private-key-file="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562689 4794 flags.go:64] FLAG: --topology-manager-policy="none" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562699 4794 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562707 4794 flags.go:64] FLAG: --topology-manager-scope="container" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562717 4794 flags.go:64] FLAG: --v="2" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562733 4794 flags.go:64] FLAG: --version="false" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562744 4794 flags.go:64] FLAG: --vmodule="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562754 4794 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.562763 4794 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.562987 4794 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.562998 4794 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563022 4794 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig 
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563030 4794 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563038 4794 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563045 4794 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563053 4794 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563061 4794 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563069 4794 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563076 4794 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563085 4794 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563093 4794 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563101 4794 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563109 4794 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563119 4794 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563130 4794 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563138 4794 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563147 4794 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563156 4794 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563163 4794 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563171 4794 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563179 4794 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563186 4794 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563198 4794 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563205 4794 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563213 4794 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563220 4794 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563230 4794 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563240 4794 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563248 4794 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563257 4794 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563266 4794 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563274 4794 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563283 4794 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563293 4794 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563326 4794 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563334 4794 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563342 4794 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563364 4794 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563373 4794 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563382 4794 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563390 4794 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563400 4794 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563409 4794 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563420 4794 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563429 4794 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563438 4794 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563446 4794 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563454 4794 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563462 4794 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563470 4794 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563477 4794 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563485 4794 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563493 4794 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563500 4794 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563511 4794 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563519 4794 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563527 4794 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563534 4794 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563541 4794 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563549 4794 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563557 4794 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563564 4794 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563572 4794 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563579 4794 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563587 4794 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563597 4794 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563605 4794 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563613 4794 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563620 4794 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.563628 4794 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.563653 4794 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.572271 4794 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.572337 4794 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572430 4794 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572441 4794 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572446 4794 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572450 4794 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572455 4794 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572461 4794 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572467 4794 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572471 4794 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572476 4794 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572481 4794 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572485 4794 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572490 4794 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572495 4794 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572499 4794 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572503 4794 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572508 4794 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572512 4794 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572517 4794 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572521 4794 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572525 4794 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572530 4794 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572534 4794 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572537 4794 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572541 4794 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572544 4794 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572548 4794 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572552 4794 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572556 4794 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572560 4794 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572566 4794 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572571 4794 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572576 4794 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572580 4794 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572585 4794 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572589 4794 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572593 4794 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572597 4794 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572600 4794 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572605 4794 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572610 4794 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572616 4794 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572622 4794 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572627 4794 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572631 4794 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572635 4794 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572639 4794 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572644 4794 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572647 4794 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572652 4794 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572656 4794 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572661 4794 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572666 4794 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572670 4794 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572674 4794 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572679 4794 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572683 4794 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572686 4794 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572690 4794 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572694 4794 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572698 4794 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572702 4794 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572706 4794 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572710 4794 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572714 4794 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572718 4794 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572724 4794 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572728 4794 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572733 4794 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572738 4794 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572742 4794 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572747 4794 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.572754 4794 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572919 4794 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572929 4794 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572935 4794 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572941 4794 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572945 4794 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572950 4794 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572955 4794 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572960 4794 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572966 4794 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572970 4794 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572975 4794 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572980 4794 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572986 4794 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572992 4794 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.572997 4794 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573001 4794 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573006 4794 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573012 4794 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573018 4794 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573024 4794 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573029 4794 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573034 4794 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573039 4794 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573044 4794 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573049 4794 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573054 4794 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573059 4794 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573064 4794 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573069 4794 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573075 4794 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573080 4794 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573086 4794 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573091 4794 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573096 4794 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573102 4794 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573107 4794 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573112 4794 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573116 4794 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573121 4794 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573125 4794 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573130 4794 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573134 4794 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573138 4794 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573143 4794 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573147 4794 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573152 4794 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573156 4794 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573161 4794 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573165 4794 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573170 4794 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573174 4794 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573179 4794 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573184 4794 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573190 4794 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573195 4794 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573200 4794 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573205 4794 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573209 4794 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573214 4794 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573219 4794 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573224 4794 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573228 4794 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573233 4794 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573238 4794 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573242 4794 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573248 4794 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573251 4794 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573255 4794 
feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573258 4794 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573262 4794 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.573265 4794 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.573271 4794 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.574568 4794 server.go:940] "Client rotation is on, will bootstrap in background" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.582146 4794 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.582241 4794 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.583973 4794 server.go:997] "Starting client certificate rotation" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.584006 4794 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.585364 4794 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-14 03:04:35.182233779 +0000 UTC Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.585448 4794 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.609546 4794 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.611721 4794 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 16:59:34 crc kubenswrapper[4794]: E0216 16:59:34.614718 4794 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.627771 4794 log.go:25] "Validated CRI v1 runtime API" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.662070 4794 log.go:25] "Validated CRI v1 image API" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.664564 4794 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.677216 4794 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-16-16-55-00-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.677253 4794 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.706737 4794 manager.go:217] Machine: {Timestamp:2026-02-16 16:59:34.704760204 +0000 UTC m=+0.652854881 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:b3d0f632-3e25-45db-ae26-e5b3ec8421a1 BootID:ccf280c0-9a33-46bd-be2c-0dca34f382e0 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 
Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:7c:38:b5 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:7c:38:b5 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:22:4d:1e Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:59:aa:5e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:fd:b8:5b Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:9b:3b:0e Speed:-1 Mtu:1496} {Name:eth10 MacAddress:7e:a9:e0:37:3d:e5 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:6a:0e:17:27:4d:88 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 
Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.707015 4794 manager_no_libpfm.go:29] cAdvisor is build 
without cgo and/or libpfm support. Perf event counters are not available. Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.707231 4794 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.707567 4794 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.707725 4794 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.707764 4794 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"
Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.707957 4794 topology_manager.go:138] "Creating topology manager with none policy" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.707972 4794 container_manager_linux.go:303] "Creating device plugin manager" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.708544 4794 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.708579 4794 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.709516 4794 state_mem.go:36] "Initialized new in-memory state store" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.709619 4794 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.714269 4794 kubelet.go:418] "Attempting to sync node with API server" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.714327 4794 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.714382 4794 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.714400 4794 kubelet.go:324] "Adding apiserver pod source" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.714416 4794 apiserver.go:42] "Waiting for node sync before watching 
apiserver pods" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.719189 4794 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.720177 4794 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.723289 4794 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.723336 4794 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 16 16:59:34 crc kubenswrapper[4794]: E0216 16:59:34.723455 4794 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.723642 4794 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 16 16:59:34 crc kubenswrapper[4794]: E0216 16:59:34.723802 4794 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: 
connect: connection refused" logger="UnhandledError" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.724688 4794 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.724719 4794 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.724729 4794 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.724739 4794 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.724755 4794 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.724779 4794 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.724789 4794 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.724803 4794 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.724815 4794 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.724825 4794 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.724843 4794 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.724852 4794 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.725957 4794 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.726656 4794 server.go:1280] 
"Started kubelet" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.728385 4794 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.730727 4794 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.730904 4794 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 16 16:59:34 crc systemd[1]: Started Kubernetes Kubelet. Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.731404 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.731456 4794 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.732075 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 01:57:58.045417821 +0000 UTC Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.732219 4794 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.732233 4794 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.732363 4794 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.732519 4794 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 16 16:59:34 crc kubenswrapper[4794]: E0216 16:59:34.732805 4794 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 16:59:34 crc 
kubenswrapper[4794]: I0216 16:59:34.736719 4794 factory.go:55] Registering systemd factory Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.736799 4794 factory.go:221] Registration of the systemd container factory successfully Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.737773 4794 factory.go:153] Registering CRI-O factory Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.737805 4794 factory.go:221] Registration of the crio container factory successfully Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.737781 4794 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 16 16:59:34 crc kubenswrapper[4794]: E0216 16:59:34.737929 4794 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.737944 4794 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.738074 4794 factory.go:103] Registering Raw factory Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.738098 4794 manager.go:1196] Started watching for new ooms in manager Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.742529 4794 manager.go:319] Starting recovery of all containers Feb 16 16:59:34 crc kubenswrapper[4794]: E0216 16:59:34.742546 4794 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="200ms" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.746224 4794 server.go:460] "Adding debug handlers to kubelet server" Feb 16 16:59:34 crc kubenswrapper[4794]: E0216 16:59:34.749220 4794 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.151:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894c8a764054e91 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 16:59:34.726610577 +0000 UTC m=+0.674705254,LastTimestamp:2026-02-16 16:59:34.726610577 +0000 UTC m=+0.674705254,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.755933 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756025 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756044 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756061 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756082 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756103 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756117 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756142 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756165 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756189 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756204 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756221 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756245 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756272 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756365 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756384 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756403 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756417 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756433 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756453 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756465 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" 
seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756479 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756494 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756509 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756528 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756541 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756555 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 
16:59:34.756567 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756580 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756593 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756641 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756698 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756719 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756801 4794 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756822 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756839 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756856 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756874 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756886 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756951 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756964 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756982 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.756994 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757011 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757025 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757038 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757050 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757091 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757104 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757116 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757129 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757141 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757193 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757208 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757275 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757340 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757360 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757374 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" 
seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757387 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757402 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757420 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757436 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757453 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757492 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 
16:59:34.757555 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757575 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757665 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757739 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757754 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757766 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757777 4794 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757836 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757851 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757865 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.757987 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.758022 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.758035 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.758055 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.758076 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.758945 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.758970 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.758991 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.759004 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.759016 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.759030 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.759046 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.759065 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.759123 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.759139 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.759154 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.759261 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.759281 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.759403 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.759431 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.759446 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 
16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.759481 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.759554 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.759572 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.759649 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.761039 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.761508 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.761588 4794 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.761633 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.761662 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.761722 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.761778 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.761818 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.761866 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.761910 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.761994 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762030 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762054 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762085 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762113 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762137 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762163 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762183 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762200 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762229 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762246 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762272 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762295 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762352 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762381 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762398 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762432 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762457 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762475 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762501 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762520 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762562 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762585 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762608 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762634 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762657 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762685 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762707 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762724 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762754 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762778 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762806 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762823 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762839 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762865 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762883 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762907 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762927 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762948 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762971 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.762995 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763019 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763047 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763068 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763095 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763120 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763150 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763173 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763190 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763223 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763238 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763258 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763284 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763324 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763334 4794 manager.go:324] Recovery completed
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763349 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763373 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763399 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763417 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763441 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763469 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763492 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763507 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763535 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763554 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763581 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763598 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763615 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763639 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763656 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763681 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763696 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763716 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763739 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763755 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763783 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763808 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763826 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763846 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.763862 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.766762 4794 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.766847 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.766872 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.766888 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.766903 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.766919 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.766934 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.766950 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.766964 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.766982 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.767000 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.767018 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.767034 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.767051 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.767067 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.767103 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.767137 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.767151 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.767165 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.767181 4794 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.767195 4794 reconstruct.go:97] "Volume reconstruction finished"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.767208 4794 reconciler.go:26] "Reconciler: start to sync state"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.777170 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.782984 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.783057 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.783076 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.786066 4794 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.786238 4794 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.786339 4794 state_mem.go:36] "Initialized new in-memory state store"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.786544 4794 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.789937 4794 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.789998 4794 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.790120 4794 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 16 16:59:34 crc kubenswrapper[4794]: E0216 16:59:34.790189 4794 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 16 16:59:34 crc kubenswrapper[4794]: W0216 16:59:34.791264 4794 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused
Feb 16 16:59:34 crc kubenswrapper[4794]: E0216 16:59:34.791439 4794 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.806207 4794 policy_none.go:49] "None policy: Start"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.807527 4794 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.807592 4794 state_mem.go:35] "Initializing new in-memory state store"
Feb 16 16:59:34 crc kubenswrapper[4794]: E0216 16:59:34.832874 4794 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.866589 4794 manager.go:334] "Starting Device Plugin manager"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.866640 4794 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.866656 4794 server.go:79] "Starting device plugin registration server"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.867218 4794 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.867235 4794 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.867445 4794 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.867574 4794 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.867587 4794 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 16 16:59:34 crc kubenswrapper[4794]: E0216 16:59:34.875232 4794 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.891135 4794 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.891250 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.892326 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.892353 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.892365 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.892485 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.892862 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.892894 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.893589 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.893609 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.893627 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.893672 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.893697 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.893705 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.893721 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.894008 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.894027 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.894405 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.894451 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.894467 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.894635 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.895148 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.895178 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.895193 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.894833 4794 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.895476 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.896124 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.896151 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.896161 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.896245 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.896374 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.896414 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.897116 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.897149 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.897162 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.897317 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.897348 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.897429 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.897463 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.897479 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.897785 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.897812 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.897823 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.898347 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.898370 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.898383 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:34 crc kubenswrapper[4794]: E0216 16:59:34.944113 4794 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="400ms" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.968864 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.968998 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.969063 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.969098 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.969133 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.969175 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.969207 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.969253 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.969326 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.969379 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.969421 4794 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.969467 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.969515 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.969543 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.969694 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.969734 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.970608 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.970660 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.970677 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:34 crc kubenswrapper[4794]: I0216 16:59:34.970709 4794 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 16:59:34 crc kubenswrapper[4794]: E0216 16:59:34.971289 4794 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.151:6443: connect: connection refused" node="crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.070899 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.071257 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.071176 4794 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.071409 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.071613 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.071844 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.071904 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.071932 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.071949 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.071957 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.071997 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.072021 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.072041 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 16:59:35 
crc kubenswrapper[4794]: I0216 16:59:35.072052 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.071995 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.072063 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.072110 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.072075 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.072116 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.072163 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.072191 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.072225 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.072231 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.072263 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.072359 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.072360 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.072400 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.072373 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.072458 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.072788 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.171718 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.173919 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.173969 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.173980 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.174007 4794 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 16:59:35 crc kubenswrapper[4794]: E0216 16:59:35.174536 4794 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.151:6443: connect: connection refused" node="crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.229648 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.250644 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: W0216 16:59:35.274225 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-90532f8121f41d12f9e05afd87e6060b6b21d088c078371f305895e58eeef00d WatchSource:0}: Error finding container 90532f8121f41d12f9e05afd87e6060b6b21d088c078371f305895e58eeef00d: Status 404 returned error can't find the container with id 90532f8121f41d12f9e05afd87e6060b6b21d088c078371f305895e58eeef00d Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.279243 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: W0216 16:59:35.287296 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a530f3fcf79d8117222154279d822ba87664f12abee2bc0c66bc7b21378f9bd4 WatchSource:0}: Error finding container a530f3fcf79d8117222154279d822ba87664f12abee2bc0c66bc7b21378f9bd4: Status 404 returned error can't find the container with id a530f3fcf79d8117222154279d822ba87664f12abee2bc0c66bc7b21378f9bd4 Feb 16 16:59:35 crc kubenswrapper[4794]: W0216 16:59:35.298225 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ef92d28cee79e4913fb1fbc3c090a2cd314781a8ee6b74a38273cc482e189538 WatchSource:0}: Error finding container ef92d28cee79e4913fb1fbc3c090a2cd314781a8ee6b74a38273cc482e189538: Status 404 returned error can't find the container with id ef92d28cee79e4913fb1fbc3c090a2cd314781a8ee6b74a38273cc482e189538 Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.299453 4794 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.310438 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 16:59:35 crc kubenswrapper[4794]: W0216 16:59:35.313513 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-2843c48f47d2919b7ed25c46c1f10f08bb30a3a688c5a49d6abfbd401ef38739 WatchSource:0}: Error finding container 2843c48f47d2919b7ed25c46c1f10f08bb30a3a688c5a49d6abfbd401ef38739: Status 404 returned error can't find the container with id 2843c48f47d2919b7ed25c46c1f10f08bb30a3a688c5a49d6abfbd401ef38739 Feb 16 16:59:35 crc kubenswrapper[4794]: W0216 16:59:35.328296 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-3d4e4c5a4e62615e83ba3c1a1bb21c9c5c34aed7426ac5ce3ec58944505b632c WatchSource:0}: Error finding container 3d4e4c5a4e62615e83ba3c1a1bb21c9c5c34aed7426ac5ce3ec58944505b632c: Status 404 returned error can't find the container with id 3d4e4c5a4e62615e83ba3c1a1bb21c9c5c34aed7426ac5ce3ec58944505b632c Feb 16 16:59:35 crc kubenswrapper[4794]: E0216 16:59:35.345747 4794 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="800ms" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.575290 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.576917 4794 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.576967 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.576980 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.577009 4794 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 16:59:35 crc kubenswrapper[4794]: E0216 16:59:35.577483 4794 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.151:6443: connect: connection refused" node="crc" Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.729927 4794 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.733032 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:48:17.474233028 +0000 UTC Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.796637 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a530f3fcf79d8117222154279d822ba87664f12abee2bc0c66bc7b21378f9bd4"} Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.798847 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"90532f8121f41d12f9e05afd87e6060b6b21d088c078371f305895e58eeef00d"} Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 
16:59:35.800112 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3d4e4c5a4e62615e83ba3c1a1bb21c9c5c34aed7426ac5ce3ec58944505b632c"} Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.801620 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2843c48f47d2919b7ed25c46c1f10f08bb30a3a688c5a49d6abfbd401ef38739"} Feb 16 16:59:35 crc kubenswrapper[4794]: I0216 16:59:35.803418 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ef92d28cee79e4913fb1fbc3c090a2cd314781a8ee6b74a38273cc482e189538"} Feb 16 16:59:35 crc kubenswrapper[4794]: W0216 16:59:35.822616 4794 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 16 16:59:35 crc kubenswrapper[4794]: E0216 16:59:35.822708 4794 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:59:35 crc kubenswrapper[4794]: W0216 16:59:35.851929 4794 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 16 
16:59:35 crc kubenswrapper[4794]: E0216 16:59:35.852065 4794 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:59:36 crc kubenswrapper[4794]: W0216 16:59:36.086891 4794 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 16 16:59:36 crc kubenswrapper[4794]: E0216 16:59:36.086990 4794 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:59:36 crc kubenswrapper[4794]: W0216 16:59:36.124881 4794 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 16 16:59:36 crc kubenswrapper[4794]: E0216 16:59:36.124996 4794 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:59:36 crc kubenswrapper[4794]: E0216 16:59:36.147389 4794 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="1.6s" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.377918 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.380879 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.380939 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.380953 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.380984 4794 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 16:59:36 crc kubenswrapper[4794]: E0216 16:59:36.381727 4794 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.151:6443: connect: connection refused" node="crc" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.645855 4794 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 16:59:36 crc kubenswrapper[4794]: E0216 16:59:36.647436 4794 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.730089 4794 csi_plugin.go:884] 
Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.733200 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 17:58:35.91693076 +0000 UTC Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.811062 4794 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608" exitCode=0 Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.811232 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.811213 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608"} Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.812546 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.812604 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.812623 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.813796 4794 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9cd2379cab47da74afbb6d77ed959334cd3389aee4776e1e0cf547847a4aef93" exitCode=0 Feb 16 16:59:36 crc kubenswrapper[4794]: 
I0216 16:59:36.813906 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9cd2379cab47da74afbb6d77ed959334cd3389aee4776e1e0cf547847a4aef93"} Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.813934 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.814737 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.814854 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.814897 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.814916 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.816242 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.816328 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.816351 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.817206 4794 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b4b570424b4cd97b02ca03d9901d70aab76c2037d3b3799978e1a116b0c8f5e8" exitCode=0 Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.817270 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b4b570424b4cd97b02ca03d9901d70aab76c2037d3b3799978e1a116b0c8f5e8"} Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.817415 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.818676 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.818728 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.818759 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.819872 4794 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="ba8e70eecfbf8e03bddd7c5db4b683416b21b4e717c377954dfec7ff20d134e6" exitCode=0 Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.820123 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"ba8e70eecfbf8e03bddd7c5db4b683416b21b4e717c377954dfec7ff20d134e6"} Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.820342 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.821486 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.821541 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:36 crc 
kubenswrapper[4794]: I0216 16:59:36.821567 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.825056 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce"} Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.825129 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a"} Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.825151 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb"} Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.825168 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc"} Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.825293 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.826549 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.826590 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 16 16:59:36 crc kubenswrapper[4794]: I0216 16:59:36.826606 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.515455 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.528629 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.730178 4794 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.734364 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 20:55:52.672824329 +0000 UTC Feb 16 16:59:37 crc kubenswrapper[4794]: E0216 16:59:37.748046 4794 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="3.2s" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.832501 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"090c39431176a8d41d18c2d8583aaa438dd244ceeeebaef0ee502ab9b0958d86"} Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.832559 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6f61dc0173699c28378af80479249a69f3056651048398a9699c1e268d386329"} Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.832577 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3e67602098c1fa62e4147637862358b3139d1307c7f2d09fc3c715ea67520fe2"} Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.833180 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.835988 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.836036 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.836051 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.840335 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992"} Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.840385 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873"} Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.840400 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d"} Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.840416 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f"} Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.842383 4794 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="70e876d54a1660120634c02546875dd421a2392eaf60e2cc92981e401d5a5437" exitCode=0 Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.842463 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.842466 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"70e876d54a1660120634c02546875dd421a2392eaf60e2cc92981e401d5a5437"} Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.843393 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.843419 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.843427 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.845455 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.845491 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d6269efb5010fa7baa3f906435c74594d73213e78cb782702c1fda4e3feae5f7"} Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.845458 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.846705 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.846715 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.846740 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.846786 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.846804 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.846806 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:37 crc kubenswrapper[4794]: E0216 16:59:37.926327 4794 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.151:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894c8a764054e91 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 16:59:34.726610577 +0000 UTC m=+0.674705254,LastTimestamp:2026-02-16 16:59:34.726610577 +0000 UTC m=+0.674705254,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.982798 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.984060 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.984111 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.984122 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:37 crc kubenswrapper[4794]: I0216 16:59:37.984176 4794 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 16:59:37 crc kubenswrapper[4794]: E0216 16:59:37.984719 4794 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.151:6443: connect: connection refused" node="crc" Feb 16 16:59:38 crc kubenswrapper[4794]: W0216 16:59:38.217135 4794 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 16 16:59:38 crc kubenswrapper[4794]: E0216 16:59:38.217243 4794 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:59:38 crc kubenswrapper[4794]: W0216 16:59:38.321220 4794 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 16 16:59:38 crc kubenswrapper[4794]: E0216 16:59:38.321384 4794 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.509436 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.735268 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:03:49.189126727 +0000 UTC Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.851003 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d"} Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.851135 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.852232 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.852271 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.852286 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.853444 4794 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0be47eccb8ba64ca65a2054c54ade6c48c7c5e3d0cd2a6b7aff5be65e4f9f2d2" exitCode=0 Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.853550 4794 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.853588 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.854221 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.854639 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0be47eccb8ba64ca65a2054c54ade6c48c7c5e3d0cd2a6b7aff5be65e4f9f2d2"} Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.854761 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.855167 4794 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.855202 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.856053 4794 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.856085 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.856098 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.856580 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.856605 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.856615 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.857126 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.857151 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.857163 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.857640 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.857664 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:38 crc kubenswrapper[4794]: I0216 16:59:38.857674 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:39 crc kubenswrapper[4794]: I0216 16:59:39.064207 
4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:39 crc kubenswrapper[4794]: I0216 16:59:39.074465 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:39 crc kubenswrapper[4794]: I0216 16:59:39.735767 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:25:57.58885962 +0000 UTC Feb 16 16:59:39 crc kubenswrapper[4794]: I0216 16:59:39.863355 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c00cf1665bc64f8a919b984cdb95be659a0a3d2afcfdfb5baf29368a6031e575"} Feb 16 16:59:39 crc kubenswrapper[4794]: I0216 16:59:39.863422 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6b5a1aad922ca5e60996a10e78e36da7d08d59f15a9f3c67207a5d8a735b340d"} Feb 16 16:59:39 crc kubenswrapper[4794]: I0216 16:59:39.863447 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"42f99a63c9f4589816dacec8ed006920bdc9dade26f88221a60210087e3bc9a2"} Feb 16 16:59:39 crc kubenswrapper[4794]: I0216 16:59:39.863460 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:39 crc kubenswrapper[4794]: I0216 16:59:39.863466 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0b3886f699203085b954431fa7149a64c3a6c2fa676d8f96c5b0a214063a9b5b"} Feb 16 16:59:39 crc kubenswrapper[4794]: I0216 16:59:39.863494 4794 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:39 crc kubenswrapper[4794]: I0216 16:59:39.863596 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:39 crc kubenswrapper[4794]: I0216 16:59:39.864788 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:39 crc kubenswrapper[4794]: I0216 16:59:39.864854 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:39 crc kubenswrapper[4794]: I0216 16:59:39.864876 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:39 crc kubenswrapper[4794]: I0216 16:59:39.864951 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:39 crc kubenswrapper[4794]: I0216 16:59:39.865037 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:39 crc kubenswrapper[4794]: I0216 16:59:39.865106 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:40 crc kubenswrapper[4794]: I0216 16:59:40.735927 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 21:58:14.398845574 +0000 UTC Feb 16 16:59:40 crc kubenswrapper[4794]: I0216 16:59:40.848778 4794 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 16:59:40 crc kubenswrapper[4794]: I0216 16:59:40.876152 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8c31878a9ecb45ff483aa67d9e752fa84506882dc640e5d85923e09bcf30e338"} Feb 16 16:59:40 crc kubenswrapper[4794]: I0216 16:59:40.876343 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:40 crc kubenswrapper[4794]: I0216 16:59:40.876415 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:40 crc kubenswrapper[4794]: I0216 16:59:40.876348 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:40 crc kubenswrapper[4794]: I0216 16:59:40.877995 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:40 crc kubenswrapper[4794]: I0216 16:59:40.878031 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:40 crc kubenswrapper[4794]: I0216 16:59:40.878048 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:40 crc kubenswrapper[4794]: I0216 16:59:40.878050 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:40 crc kubenswrapper[4794]: I0216 16:59:40.878142 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:40 crc kubenswrapper[4794]: I0216 16:59:40.878172 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:40 crc kubenswrapper[4794]: I0216 16:59:40.878390 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:40 crc kubenswrapper[4794]: I0216 16:59:40.878447 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 16:59:40 crc kubenswrapper[4794]: I0216 16:59:40.878469 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:41 crc kubenswrapper[4794]: I0216 16:59:41.185374 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:41 crc kubenswrapper[4794]: I0216 16:59:41.186508 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:41 crc kubenswrapper[4794]: I0216 16:59:41.186545 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:41 crc kubenswrapper[4794]: I0216 16:59:41.186557 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:41 crc kubenswrapper[4794]: I0216 16:59:41.186579 4794 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 16:59:41 crc kubenswrapper[4794]: I0216 16:59:41.736547 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:05:14.693502277 +0000 UTC Feb 16 16:59:41 crc kubenswrapper[4794]: I0216 16:59:41.878353 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:41 crc kubenswrapper[4794]: I0216 16:59:41.879436 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:41 crc kubenswrapper[4794]: I0216 16:59:41.879480 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:41 crc kubenswrapper[4794]: I0216 16:59:41.879504 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:42 crc kubenswrapper[4794]: I0216 
16:59:42.737211 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 20:50:04.690679776 +0000 UTC Feb 16 16:59:42 crc kubenswrapper[4794]: I0216 16:59:42.836237 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:42 crc kubenswrapper[4794]: I0216 16:59:42.836544 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:42 crc kubenswrapper[4794]: I0216 16:59:42.837894 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:42 crc kubenswrapper[4794]: I0216 16:59:42.837928 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:42 crc kubenswrapper[4794]: I0216 16:59:42.837939 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:43 crc kubenswrapper[4794]: I0216 16:59:43.737993 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 18:30:15.441462419 +0000 UTC Feb 16 16:59:44 crc kubenswrapper[4794]: I0216 16:59:44.109237 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 16 16:59:44 crc kubenswrapper[4794]: I0216 16:59:44.109586 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:44 crc kubenswrapper[4794]: I0216 16:59:44.111359 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:44 crc kubenswrapper[4794]: I0216 16:59:44.111399 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 
16:59:44 crc kubenswrapper[4794]: I0216 16:59:44.111416 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:44 crc kubenswrapper[4794]: I0216 16:59:44.738129 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:08:07.760195874 +0000 UTC Feb 16 16:59:44 crc kubenswrapper[4794]: E0216 16:59:44.875563 4794 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 16:59:45 crc kubenswrapper[4794]: I0216 16:59:45.238883 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 16:59:45 crc kubenswrapper[4794]: I0216 16:59:45.239158 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:45 crc kubenswrapper[4794]: I0216 16:59:45.241037 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:45 crc kubenswrapper[4794]: I0216 16:59:45.241118 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:45 crc kubenswrapper[4794]: I0216 16:59:45.241152 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:45 crc kubenswrapper[4794]: I0216 16:59:45.739138 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 05:31:43.856004479 +0000 UTC Feb 16 16:59:46 crc kubenswrapper[4794]: I0216 16:59:46.459042 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:46 crc kubenswrapper[4794]: I0216 16:59:46.459271 4794 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:46 crc kubenswrapper[4794]: I0216 16:59:46.460924 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:46 crc kubenswrapper[4794]: I0216 16:59:46.460984 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:46 crc kubenswrapper[4794]: I0216 16:59:46.461008 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:46 crc kubenswrapper[4794]: I0216 16:59:46.465703 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:46 crc kubenswrapper[4794]: I0216 16:59:46.739566 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:36:09.517455709 +0000 UTC Feb 16 16:59:46 crc kubenswrapper[4794]: I0216 16:59:46.894820 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:46 crc kubenswrapper[4794]: I0216 16:59:46.896225 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:46 crc kubenswrapper[4794]: I0216 16:59:46.896277 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:46 crc kubenswrapper[4794]: I0216 16:59:46.896290 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:47 crc kubenswrapper[4794]: I0216 16:59:47.740393 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 18:52:02.533480768 +0000 UTC Feb 16 
16:59:47 crc kubenswrapper[4794]: I0216 16:59:47.993224 4794 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 16 16:59:47 crc kubenswrapper[4794]: I0216 16:59:47.993404 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 16 16:59:48 crc kubenswrapper[4794]: I0216 16:59:48.731753 4794 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 16 16:59:48 crc kubenswrapper[4794]: I0216 16:59:48.740884 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 02:10:12.595293061 +0000 UTC Feb 16 16:59:48 crc kubenswrapper[4794]: W0216 16:59:48.753402 4794 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 16 16:59:48 crc kubenswrapper[4794]: I0216 16:59:48.753594 4794 trace.go:236] Trace[865371725]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 16:59:38.751) (total time: 10002ms): Feb 16 16:59:48 crc kubenswrapper[4794]: Trace[865371725]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS 
handshake timeout 10001ms (16:59:48.753) Feb 16 16:59:48 crc kubenswrapper[4794]: Trace[865371725]: [10.002040543s] [10.002040543s] END Feb 16 16:59:48 crc kubenswrapper[4794]: E0216 16:59:48.753643 4794 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 16:59:48 crc kubenswrapper[4794]: W0216 16:59:48.838767 4794 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 16 16:59:48 crc kubenswrapper[4794]: I0216 16:59:48.838867 4794 trace.go:236] Trace[1142010930]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 16:59:38.837) (total time: 10001ms): Feb 16 16:59:48 crc kubenswrapper[4794]: Trace[1142010930]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:59:48.838) Feb 16 16:59:48 crc kubenswrapper[4794]: Trace[1142010930]: [10.001151404s] [10.001151404s] END Feb 16 16:59:48 crc kubenswrapper[4794]: E0216 16:59:48.838890 4794 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 16:59:49 crc kubenswrapper[4794]: I0216 16:59:49.459692 4794 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 16:59:49 crc kubenswrapper[4794]: I0216 16:59:49.459789 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 16:59:49 crc kubenswrapper[4794]: I0216 16:59:49.659879 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 16 16:59:49 crc kubenswrapper[4794]: I0216 16:59:49.660148 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:49 crc kubenswrapper[4794]: I0216 16:59:49.661962 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:49 crc kubenswrapper[4794]: I0216 16:59:49.662028 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:49 crc kubenswrapper[4794]: I0216 16:59:49.662050 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:49 crc kubenswrapper[4794]: I0216 16:59:49.705705 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 16 16:59:49 crc kubenswrapper[4794]: I0216 16:59:49.741203 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 20:23:49.157927567 +0000 UTC Feb 16 16:59:49 crc kubenswrapper[4794]: I0216 16:59:49.856959 4794 patch_prober.go:28] interesting pod/kube-apiserver-crc 
container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 16:59:49 crc kubenswrapper[4794]: I0216 16:59:49.857026 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 16:59:49 crc kubenswrapper[4794]: I0216 16:59:49.864399 4794 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 16:59:49 crc kubenswrapper[4794]: I0216 16:59:49.864480 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 16:59:49 crc kubenswrapper[4794]: I0216 16:59:49.903280 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:49 crc kubenswrapper[4794]: I0216 16:59:49.904589 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:49 crc kubenswrapper[4794]: I0216 16:59:49.904628 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:49 crc kubenswrapper[4794]: I0216 
16:59:49.904636 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:49 crc kubenswrapper[4794]: I0216 16:59:49.922805 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 16 16:59:50 crc kubenswrapper[4794]: I0216 16:59:50.742262 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:23:18.872988352 +0000 UTC Feb 16 16:59:50 crc kubenswrapper[4794]: I0216 16:59:50.908926 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:50 crc kubenswrapper[4794]: I0216 16:59:50.910081 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:50 crc kubenswrapper[4794]: I0216 16:59:50.910207 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:50 crc kubenswrapper[4794]: I0216 16:59:50.910227 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:51 crc kubenswrapper[4794]: I0216 16:59:51.742444 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 11:00:39.453002254 +0000 UTC Feb 16 16:59:52 crc kubenswrapper[4794]: I0216 16:59:52.742888 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 14:26:26.54797291 +0000 UTC Feb 16 16:59:52 crc kubenswrapper[4794]: I0216 16:59:52.843108 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:52 crc kubenswrapper[4794]: I0216 16:59:52.843275 4794 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Feb 16 16:59:52 crc kubenswrapper[4794]: I0216 16:59:52.844451 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:52 crc kubenswrapper[4794]: I0216 16:59:52.844483 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:52 crc kubenswrapper[4794]: I0216 16:59:52.844496 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:52 crc kubenswrapper[4794]: I0216 16:59:52.851430 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:52 crc kubenswrapper[4794]: I0216 16:59:52.913179 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:52 crc kubenswrapper[4794]: I0216 16:59:52.913979 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:52 crc kubenswrapper[4794]: I0216 16:59:52.914001 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:52 crc kubenswrapper[4794]: I0216 16:59:52.914011 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:53 crc kubenswrapper[4794]: I0216 16:59:53.743637 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 22:04:13.401322301 +0000 UTC Feb 16 16:59:54 crc kubenswrapper[4794]: I0216 16:59:54.265419 4794 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 16:59:54 crc kubenswrapper[4794]: I0216 16:59:54.744422 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:33:37.386818118 +0000 UTC Feb 16 16:59:54 crc kubenswrapper[4794]: E0216 16:59:54.854945 4794 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 16 16:59:54 crc kubenswrapper[4794]: E0216 16:59:54.858098 4794 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 16 16:59:54 crc kubenswrapper[4794]: I0216 16:59:54.861110 4794 trace.go:236] Trace[450222289]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 16:59:44.661) (total time: 10199ms): Feb 16 16:59:54 crc kubenswrapper[4794]: Trace[450222289]: ---"Objects listed" error: 10199ms (16:59:54.860) Feb 16 16:59:54 crc kubenswrapper[4794]: Trace[450222289]: [10.199734286s] [10.199734286s] END Feb 16 16:59:54 crc kubenswrapper[4794]: I0216 16:59:54.861536 4794 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 16:59:54 crc kubenswrapper[4794]: I0216 16:59:54.861737 4794 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 16:59:54 crc kubenswrapper[4794]: I0216 16:59:54.864234 4794 trace.go:236] Trace[177398657]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 16:59:42.398) (total time: 12465ms): Feb 16 16:59:54 crc kubenswrapper[4794]: Trace[177398657]: ---"Objects listed" error: 12465ms (16:59:54.864) Feb 16 16:59:54 crc kubenswrapper[4794]: Trace[177398657]: [12.465492412s] [12.465492412s] END Feb 16 16:59:54 crc kubenswrapper[4794]: I0216 16:59:54.864271 4794 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 16:59:54 crc 
kubenswrapper[4794]: I0216 16:59:54.869810 4794 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 16:59:54 crc kubenswrapper[4794]: I0216 16:59:54.892671 4794 csr.go:261] certificate signing request csr-862v8 is approved, waiting to be issued Feb 16 16:59:54 crc kubenswrapper[4794]: I0216 16:59:54.923606 4794 csr.go:257] certificate signing request csr-862v8 is issued Feb 16 16:59:54 crc kubenswrapper[4794]: I0216 16:59:54.930117 4794 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35434->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 16 16:59:54 crc kubenswrapper[4794]: I0216 16:59:54.930184 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35434->192.168.126.11:17697: read: connection reset by peer" Feb 16 16:59:54 crc kubenswrapper[4794]: I0216 16:59:54.930603 4794 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 16 16:59:54 crc kubenswrapper[4794]: I0216 16:59:54.930669 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: 
connection refused" Feb 16 16:59:54 crc kubenswrapper[4794]: I0216 16:59:54.930633 4794 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53242->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 16 16:59:54 crc kubenswrapper[4794]: I0216 16:59:54.931260 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53242->192.168.126.11:17697: read: connection reset by peer" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.051423 4794 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.728346 4794 apiserver.go:52] "Watching apiserver" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.731941 4794 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.732377 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.732845 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.732921 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.733044 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.733530 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 16:59:55 crc kubenswrapper[4794]: E0216 16:59:55.733591 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 16:59:55 crc kubenswrapper[4794]: E0216 16:59:55.733644 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.733905 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.734478 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 16:59:55 crc kubenswrapper[4794]: E0216 16:59:55.734581 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.736359 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.736435 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.736722 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.736749 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.737492 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.737948 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.738157 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.739070 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.739548 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.744614 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 03:52:17.850673463 +0000 UTC
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.766605 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.766671 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.766710 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.767404 4794 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.768590 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.774394 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.782452 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.792674 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.795137 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.808908 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.826687 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.833764 4794 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.837640 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.850601 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867176 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867230 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867254 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867278 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867321 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867342 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867373 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867392 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867411 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867431 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867453 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867475 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867497 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867522 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867543 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867567 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867590 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867615 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867636 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867658 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867681 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867702 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867723 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867729 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867763 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867751 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867886 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867927 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867962 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.867995 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868026 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868048 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868056 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868117 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868145 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868175 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868203 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868229 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868252 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868273 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868321 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868345 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868367 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868390 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868416 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868443 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868472 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868132 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868416 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868494 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868598 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868698 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868753 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.868806 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.870052 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.870076 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.869070 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.869193 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.869393 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.869502 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.869526 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.869590 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.869592 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.869735 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.869732 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.869794 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.869840 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.869828 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.869933 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.869964 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.870159 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.870166 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.870511 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.870546 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.870551 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.870622 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.870669 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.870690 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.870785 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.870807 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.870892 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.870989 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.871004 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.871010 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.871044 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.871232 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.871386 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.871453 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.871829 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.871993 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872038 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872266 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872287 4794 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872318 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872335 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872349 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872363 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872378 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872392 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872408 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872425 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872453 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872475 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872491 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872508 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872525 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872541 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872556 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872572 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872587 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872606 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872622 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872639 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872656 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872675 4794 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872698 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872714 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872733 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872748 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872765 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872811 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872853 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872873 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872892 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872911 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872928 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872981 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.872999 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873016 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873034 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873051 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873068 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873084 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873100 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873117 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873132 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873148 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" 
(UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873164 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873182 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873197 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873214 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873230 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873247 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873261 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873276 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873292 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873326 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873341 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" 
(UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873356 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873370 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873385 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873400 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873415 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 16:59:55 crc 
kubenswrapper[4794]: I0216 16:59:55.873451 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873478 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873497 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873536 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873554 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873548 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" 
(OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873573 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873643 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873684 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873817 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873859 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 
16:59:55.873892 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873928 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873948 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.873967 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874000 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874035 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874071 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874104 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874142 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874175 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874208 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874240 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874279 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874294 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874424 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874448 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874457 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874491 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874569 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874608 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874618 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874678 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874716 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874742 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874752 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874858 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874870 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874913 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.874951 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875057 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875088 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875096 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875131 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875165 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875191 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875209 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875244 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875288 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875365 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875398 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875432 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875468 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875504 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875540 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875766 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875806 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875839 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875872 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875909 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875949 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.875985 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876018 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876055 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876088 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876122 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876156 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876188 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876220 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876253 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876286 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876347 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876449 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876469 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876484 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876524 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876561 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876582 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876683 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876722 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876738 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876755 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876771 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876818 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876859 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876893 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876931 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876930 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.876963 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877000 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877033 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877065 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877079 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877073 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877096 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877099 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877209 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877211 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877263 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877283 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877326 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877344 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877398 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877424 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877447 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877461 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877471 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877544 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877572 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877583 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877597 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877615 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877638 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877713 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877785 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877985 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877907 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.878455 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.878683 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.878746 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.879125 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.879478 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.879568 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.879625 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.879646 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.880053 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.880707 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.881230 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.881561 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.889484 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.889705 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.891124 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.891906 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.891128 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.891504 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.891583 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.891691 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.891974 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.877845 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.893234 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.893344 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.893384 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.893410 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.893437 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.893475 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.893509 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.893588 4794 reconciler_common.go:293] "Volume detached for
volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.893613 4794 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.893629 4794 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.893642 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.893665 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.893680 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.895600 4794 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.895660 4794 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.895690 4794 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.878138 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.895744 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.893754 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.894045 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.894203 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.894064 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.894237 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.894235 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.894362 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.894497 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.894684 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.894765 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.895942 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.894815 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.894829 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: E0216 16:59:55.895346 4794 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 16:59:55 crc kubenswrapper[4794]: E0216 16:59:55.896275 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-16 16:59:56.396245735 +0000 UTC m=+22.344340402 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.896290 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.896422 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.896482 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.895098 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.895778 4794 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.896563 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.896581 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.896621 4794 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc 
kubenswrapper[4794]: I0216 16:59:55.896636 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.896650 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.895446 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.895518 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.895536 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.896704 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.898256 4794 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.898296 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.898373 4794 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.898399 4794 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.898429 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.898451 4794 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: 
I0216 16:59:55.898470 4794 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900292 4794 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900375 4794 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900398 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900421 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900439 4794 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900458 4794 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900486 4794 reconciler_common.go:293] "Volume detached for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900504 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900525 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900544 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900603 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900624 4794 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900643 4794 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900664 4794 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900683 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900701 4794 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900720 4794 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900739 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900757 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900778 4794 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900796 4794 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900813 4794 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900827 4794 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900840 4794 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900851 4794 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900863 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900874 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900885 4794 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900897 4794 reconciler_common.go:293] "Volume detached for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900909 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900921 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900933 4794 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900945 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900956 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900968 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900980 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.900992 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901005 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901016 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901028 4794 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901042 4794 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901055 4794 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901070 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901084 4794 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901097 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901108 4794 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901120 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901134 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901146 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901158 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 16 
16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901170 4794 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901182 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901195 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901209 4794 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901223 4794 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901238 4794 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901253 4794 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901270 4794 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901285 4794 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901322 4794 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901338 4794 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901351 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901363 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901376 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901388 4794 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901400 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901411 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.901424 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.896838 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.897014 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.897041 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: E0216 16:59:55.897106 4794 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 16:59:55 crc kubenswrapper[4794]: E0216 16:59:55.901544 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:56.401523028 +0000 UTC m=+22.349617685 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.897449 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: E0216 16:59:55.903541 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 16:59:55 crc kubenswrapper[4794]: E0216 16:59:55.903563 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 16:59:55 crc kubenswrapper[4794]: E0216 16:59:55.903574 4794 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:55 crc kubenswrapper[4794]: E0216 16:59:55.903631 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:56.403614455 +0000 UTC m=+22.351709102 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.903770 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.910537 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.910800 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.911022 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.911451 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.911464 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.912090 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.912372 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.913218 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.913831 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: E0216 16:59:55.914167 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 16:59:56.414144681 +0000 UTC m=+22.362239338 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.920573 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: E0216 16:59:55.920768 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 16:59:55 crc kubenswrapper[4794]: E0216 16:59:55.920788 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 16:59:55 crc kubenswrapper[4794]: E0216 16:59:55.920800 4794 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:55 crc kubenswrapper[4794]: E0216 16:59:55.920843 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:56.420828392 +0000 UTC m=+22.368923039 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.921433 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.921675 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.922106 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.922133 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.922199 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.922093 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.921726 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.921624 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.921946 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.922654 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.924184 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.924325 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.924350 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.924450 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.924529 4794 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-16 16:54:54 +0000 UTC, rotation deadline is 2026-11-15 09:18:22.41769031 +0000 UTC Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.924610 4794 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6520h18m26.493104036s for next certificate rotation Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.925045 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.928752 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.932944 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.936141 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). 
InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.937497 4794 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d" exitCode=255 Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.937545 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d"} Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.942953 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.943088 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.943385 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.946379 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.946952 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.948560 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.948617 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.949401 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.949670 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.949823 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.951673 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.952313 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.953529 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.957030 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.957127 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.957476 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.958066 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.958467 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.959728 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.962714 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.962971 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.963214 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.963441 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.963596 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.963627 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.964000 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.963785 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.964089 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.964331 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.965073 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.966190 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.966392 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.966504 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.966679 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.966694 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.966744 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.966748 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.966779 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.966955 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.967046 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.967051 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.967245 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.967295 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.967544 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.968091 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.968352 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.968573 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.968606 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.968937 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.968955 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.969108 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.969194 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.969602 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.969834 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.970232 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.971190 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.972012 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.972351 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.979601 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.981610 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.987805 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:55 crc kubenswrapper[4794]: I0216 16:59:55.995604 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.001881 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.001954 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.001969 4794 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc 
kubenswrapper[4794]: I0216 16:59:56.001983 4794 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002004 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002041 4794 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002052 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002063 4794 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002073 4794 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002084 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002082 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002098 4794 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002171 4794 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002192 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002211 4794 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002228 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002244 4794 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002261 
4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002277 4794 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002292 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002338 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002355 4794 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002367 4794 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002380 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002391 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002403 4794 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002415 4794 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002427 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002439 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002453 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002465 4794 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002478 4794 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") 
on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002499 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002512 4794 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002524 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002536 4794 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002548 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002562 4794 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002574 4794 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 
16:59:56.002585 4794 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002596 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002608 4794 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002620 4794 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002632 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002644 4794 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002655 4794 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002667 4794 reconciler_common.go:293] "Volume detached for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002679 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002690 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002702 4794 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002714 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002859 4794 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002894 4794 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002961 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on 
node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002976 4794 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002987 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.002998 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003011 4794 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003191 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003225 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003243 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 
16:59:56.003259 4794 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003276 4794 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003293 4794 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003325 4794 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003339 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003350 4794 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003362 4794 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003378 4794 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003391 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003403 4794 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003415 4794 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003427 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003438 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003450 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003462 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003474 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003485 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003498 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003510 4794 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003522 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003534 4794 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003549 4794 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003563 4794 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003574 4794 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003587 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003599 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003610 4794 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003622 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003633 4794 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003645 4794 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003658 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003670 4794 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003683 4794 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003723 4794 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003734 4794 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003746 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node 
\"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003757 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003769 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003780 4794 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003792 4794 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003803 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003815 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.003826 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.026027 4794 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.026342 4794 scope.go:117] "RemoveContainer" containerID="a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.054337 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 16:59:56 crc kubenswrapper[4794]: W0216 16:59:56.067061 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-731f22461df268562eff28db5a220df857853f9b6283295089e144d471abcd69 WatchSource:0}: Error finding container 731f22461df268562eff28db5a220df857853f9b6283295089e144d471abcd69: Status 404 returned error can't find the container with id 731f22461df268562eff28db5a220df857853f9b6283295089e144d471abcd69 Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.070280 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.088781 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 16:59:56 crc kubenswrapper[4794]: W0216 16:59:56.106882 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-608d740d0ca4d0c15ec047e5d9d585e4f0e65f6def30359c4fc6ec485967af69 WatchSource:0}: Error finding container 608d740d0ca4d0c15ec047e5d9d585e4f0e65f6def30359c4fc6ec485967af69: Status 404 returned error can't find the container with id 608d740d0ca4d0c15ec047e5d9d585e4f0e65f6def30359c4fc6ec485967af69 Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.246816 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-8q7xf"] Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.247250 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-fk74m"] Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.248046 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-fk74m" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.248570 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.248912 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-zwhdn"] Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.249197 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9krvl"] Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.249848 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.250231 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.250865 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.258038 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.258129 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-tqtvb"] Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.258286 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.258661 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tqtvb" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.258877 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.259325 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.259573 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.259714 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.259933 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.260073 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.260201 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.260089 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.260358 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.260420 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.260590 4794 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.260784 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.260928 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.261133 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.261168 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.261272 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.264072 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.264642 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.273635 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.277829 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.299974 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.335401 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.380880 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408358 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d985e4f1-78bb-43f9-b86c-cd47831d602c-ovn-node-metrics-cert\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408411 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpr45\" (UniqueName: \"kubernetes.io/projected/d985e4f1-78bb-43f9-b86c-cd47831d602c-kube-api-access-dpr45\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408436 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2d17fb0b-381a-46a1-8bba-33daee594e18-proxy-tls\") pod \"machine-config-daemon-8q7xf\" (UID: \"2d17fb0b-381a-46a1-8bba-33daee594e18\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408457 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-var-lib-openvswitch\") pod \"ovnkube-node-9krvl\" (UID: 
\"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408475 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-node-log\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408493 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-host-var-lib-kubelet\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408512 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-kubelet\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408530 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d985e4f1-78bb-43f9-b86c-cd47831d602c-env-overrides\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408545 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfvks\" (UniqueName: \"kubernetes.io/projected/b325454b-7201-4221-a07a-6093f1245d66-kube-api-access-kfvks\") pod 
\"multus-additional-cni-plugins-fk74m\" (UID: \"b325454b-7201-4221-a07a-6093f1245d66\") " pod="openshift-multus/multus-additional-cni-plugins-fk74m" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408560 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-host-run-netns\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408621 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-run-netns\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408637 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-run-systemd\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408691 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-os-release\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408758 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b325454b-7201-4221-a07a-6093f1245d66-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-fk74m\" (UID: \"b325454b-7201-4221-a07a-6093f1245d66\") " pod="openshift-multus/multus-additional-cni-plugins-fk74m" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408776 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-cnibin\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408791 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d985e4f1-78bb-43f9-b86c-cd47831d602c-ovnkube-config\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408806 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b325454b-7201-4221-a07a-6093f1245d66-os-release\") pod \"multus-additional-cni-plugins-fk74m\" (UID: \"b325454b-7201-4221-a07a-6093f1245d66\") " pod="openshift-multus/multus-additional-cni-plugins-fk74m" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408819 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-multus-socket-dir-parent\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408833 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-multus-daemon-config\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408850 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-system-cni-dir\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408863 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-host-run-multus-certs\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408876 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-etc-kubernetes\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408895 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-run-ovn\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408912 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-host-run-k8s-cni-cncf-io\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408934 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-slash\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408952 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-cni-bin\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408970 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-cni-netd\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.408988 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b325454b-7201-4221-a07a-6093f1245d66-system-cni-dir\") pod \"multus-additional-cni-plugins-fk74m\" (UID: \"b325454b-7201-4221-a07a-6093f1245d66\") " pod="openshift-multus/multus-additional-cni-plugins-fk74m" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409013 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409030 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d17fb0b-381a-46a1-8bba-33daee594e18-mcd-auth-proxy-config\") pod \"machine-config-daemon-8q7xf\" (UID: \"2d17fb0b-381a-46a1-8bba-33daee594e18\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409045 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-host-var-lib-cni-multus\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409061 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7860ec44-a894-441d-b76a-2a88fa8441ab-hosts-file\") pod \"node-resolver-tqtvb\" (UID: \"7860ec44-a894-441d-b76a-2a88fa8441ab\") " pod="openshift-dns/node-resolver-tqtvb" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409076 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2d17fb0b-381a-46a1-8bba-33daee594e18-rootfs\") pod \"machine-config-daemon-8q7xf\" (UID: \"2d17fb0b-381a-46a1-8bba-33daee594e18\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 16:59:56 crc 
kubenswrapper[4794]: I0216 16:59:56.409091 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztkjz\" (UniqueName: \"kubernetes.io/projected/2d17fb0b-381a-46a1-8bba-33daee594e18-kube-api-access-ztkjz\") pod \"machine-config-daemon-8q7xf\" (UID: \"2d17fb0b-381a-46a1-8bba-33daee594e18\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409105 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-run-ovn-kubernetes\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409121 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b325454b-7201-4221-a07a-6093f1245d66-cnibin\") pod \"multus-additional-cni-plugins-fk74m\" (UID: \"b325454b-7201-4221-a07a-6093f1245d66\") " pod="openshift-multus/multus-additional-cni-plugins-fk74m" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409135 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-multus-cni-dir\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409148 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pk7g\" (UniqueName: \"kubernetes.io/projected/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-kube-api-access-9pk7g\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") 
" pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409164 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d985e4f1-78bb-43f9-b86c-cd47831d602c-ovnkube-script-lib\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409179 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-hostroot\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409195 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b325454b-7201-4221-a07a-6093f1245d66-tuning-conf-dir\") pod \"multus-additional-cni-plugins-fk74m\" (UID: \"b325454b-7201-4221-a07a-6093f1245d66\") " pod="openshift-multus/multus-additional-cni-plugins-fk74m" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409208 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-multus-conf-dir\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409226 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409241 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-etc-openvswitch\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409256 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409273 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b325454b-7201-4221-a07a-6093f1245d66-cni-binary-copy\") pod \"multus-additional-cni-plugins-fk74m\" (UID: \"b325454b-7201-4221-a07a-6093f1245d66\") " pod="openshift-multus/multus-additional-cni-plugins-fk74m" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409286 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-log-socket\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409315 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvswt\" (UniqueName: 
\"kubernetes.io/projected/7860ec44-a894-441d-b76a-2a88fa8441ab-kube-api-access-kvswt\") pod \"node-resolver-tqtvb\" (UID: \"7860ec44-a894-441d-b76a-2a88fa8441ab\") " pod="openshift-dns/node-resolver-tqtvb" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409337 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-run-openvswitch\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409355 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409414 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-systemd-units\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409443 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-cni-binary-copy\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.409463 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-host-var-lib-cni-bin\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: E0216 16:59:56.409649 4794 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 16:59:56 crc kubenswrapper[4794]: E0216 16:59:56.409694 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:57.409680603 +0000 UTC m=+23.357775250 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 16:59:56 crc kubenswrapper[4794]: E0216 16:59:56.410111 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 16:59:56 crc kubenswrapper[4794]: E0216 16:59:56.410134 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 16:59:56 crc kubenswrapper[4794]: E0216 16:59:56.410145 4794 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:56 crc kubenswrapper[4794]: E0216 16:59:56.410171 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:57.410162156 +0000 UTC m=+23.358256803 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:56 crc kubenswrapper[4794]: E0216 16:59:56.410233 4794 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 16:59:56 crc kubenswrapper[4794]: E0216 16:59:56.410253 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:57.410247718 +0000 UTC m=+23.358342365 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.424359 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.452886 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.461955 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.466528 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.469859 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.478530 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.484860 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.493164 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.503404 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.510821 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.510945 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-host-run-k8s-cni-cncf-io\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511000 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-host-run-k8s-cni-cncf-io\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: E0216 16:59:56.511031 4794 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 16:59:57.511004253 +0000 UTC m=+23.459098900 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511101 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-slash\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511127 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-cni-bin\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511147 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-cni-netd\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511162 4794 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b325454b-7201-4221-a07a-6093f1245d66-system-cni-dir\") pod \"multus-additional-cni-plugins-fk74m\" (UID: \"b325454b-7201-4221-a07a-6093f1245d66\") " pod="openshift-multus/multus-additional-cni-plugins-fk74m" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511193 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d17fb0b-381a-46a1-8bba-33daee594e18-mcd-auth-proxy-config\") pod \"machine-config-daemon-8q7xf\" (UID: \"2d17fb0b-381a-46a1-8bba-33daee594e18\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511211 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-host-var-lib-cni-multus\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511212 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-cni-bin\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511230 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-multus-cni-dir\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511251 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-cni-netd\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511317 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-host-var-lib-cni-multus\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511332 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7860ec44-a894-441d-b76a-2a88fa8441ab-hosts-file\") pod \"node-resolver-tqtvb\" (UID: \"7860ec44-a894-441d-b76a-2a88fa8441ab\") " pod="openshift-dns/node-resolver-tqtvb" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511192 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-slash\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511268 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7860ec44-a894-441d-b76a-2a88fa8441ab-hosts-file\") pod \"node-resolver-tqtvb\" (UID: \"7860ec44-a894-441d-b76a-2a88fa8441ab\") " pod="openshift-dns/node-resolver-tqtvb" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511222 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b325454b-7201-4221-a07a-6093f1245d66-system-cni-dir\") pod \"multus-additional-cni-plugins-fk74m\" (UID: 
\"b325454b-7201-4221-a07a-6093f1245d66\") " pod="openshift-multus/multus-additional-cni-plugins-fk74m" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511371 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2d17fb0b-381a-46a1-8bba-33daee594e18-rootfs\") pod \"machine-config-daemon-8q7xf\" (UID: \"2d17fb0b-381a-46a1-8bba-33daee594e18\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511420 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztkjz\" (UniqueName: \"kubernetes.io/projected/2d17fb0b-381a-46a1-8bba-33daee594e18-kube-api-access-ztkjz\") pod \"machine-config-daemon-8q7xf\" (UID: \"2d17fb0b-381a-46a1-8bba-33daee594e18\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511441 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-multus-cni-dir\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511443 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-run-ovn-kubernetes\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511467 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-run-ovn-kubernetes\") pod \"ovnkube-node-9krvl\" (UID: 
\"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511466 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2d17fb0b-381a-46a1-8bba-33daee594e18-rootfs\") pod \"machine-config-daemon-8q7xf\" (UID: \"2d17fb0b-381a-46a1-8bba-33daee594e18\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511479 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b325454b-7201-4221-a07a-6093f1245d66-cnibin\") pod \"multus-additional-cni-plugins-fk74m\" (UID: \"b325454b-7201-4221-a07a-6093f1245d66\") " pod="openshift-multus/multus-additional-cni-plugins-fk74m" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511499 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-hostroot\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511506 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b325454b-7201-4221-a07a-6093f1245d66-cnibin\") pod \"multus-additional-cni-plugins-fk74m\" (UID: \"b325454b-7201-4221-a07a-6093f1245d66\") " pod="openshift-multus/multus-additional-cni-plugins-fk74m" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511516 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pk7g\" (UniqueName: \"kubernetes.io/projected/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-kube-api-access-9pk7g\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 
16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511540 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d985e4f1-78bb-43f9-b86c-cd47831d602c-ovnkube-script-lib\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511557 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b325454b-7201-4221-a07a-6093f1245d66-cni-binary-copy\") pod \"multus-additional-cni-plugins-fk74m\" (UID: \"b325454b-7201-4221-a07a-6093f1245d66\") " pod="openshift-multus/multus-additional-cni-plugins-fk74m" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511580 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b325454b-7201-4221-a07a-6093f1245d66-tuning-conf-dir\") pod \"multus-additional-cni-plugins-fk74m\" (UID: \"b325454b-7201-4221-a07a-6093f1245d66\") " pod="openshift-multus/multus-additional-cni-plugins-fk74m" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511600 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-multus-conf-dir\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511630 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-etc-openvswitch\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc 
kubenswrapper[4794]: I0216 16:59:56.511648 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511669 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-run-openvswitch\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511692 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-log-socket\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511710 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvswt\" (UniqueName: \"kubernetes.io/projected/7860ec44-a894-441d-b76a-2a88fa8441ab-kube-api-access-kvswt\") pod \"node-resolver-tqtvb\" (UID: \"7860ec44-a894-441d-b76a-2a88fa8441ab\") " pod="openshift-dns/node-resolver-tqtvb" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511541 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-hostroot\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511738 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-systemd-units\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511756 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-cni-binary-copy\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511766 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511770 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-host-var-lib-cni-bin\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511789 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-host-var-lib-cni-bin\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511797 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-node-log\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511813 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-run-openvswitch\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511818 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d985e4f1-78bb-43f9-b86c-cd47831d602c-ovn-node-metrics-cert\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511835 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-log-socket\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511842 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpr45\" (UniqueName: \"kubernetes.io/projected/d985e4f1-78bb-43f9-b86c-cd47831d602c-kube-api-access-dpr45\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511866 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/2d17fb0b-381a-46a1-8bba-33daee594e18-proxy-tls\") pod \"machine-config-daemon-8q7xf\" (UID: \"2d17fb0b-381a-46a1-8bba-33daee594e18\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511887 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-var-lib-openvswitch\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511905 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfvks\" (UniqueName: \"kubernetes.io/projected/b325454b-7201-4221-a07a-6093f1245d66-kube-api-access-kfvks\") pod \"multus-additional-cni-plugins-fk74m\" (UID: \"b325454b-7201-4221-a07a-6093f1245d66\") " pod="openshift-multus/multus-additional-cni-plugins-fk74m" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511914 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d17fb0b-381a-46a1-8bba-33daee594e18-mcd-auth-proxy-config\") pod \"machine-config-daemon-8q7xf\" (UID: \"2d17fb0b-381a-46a1-8bba-33daee594e18\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511942 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-host-var-lib-kubelet\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511921 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-host-var-lib-kubelet\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511968 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-multus-conf-dir\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511991 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-etc-openvswitch\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.511995 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-kubelet\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512009 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-node-log\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512020 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d985e4f1-78bb-43f9-b86c-cd47831d602c-env-overrides\") pod \"ovnkube-node-9krvl\" (UID: 
\"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512052 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-os-release\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512069 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-host-run-netns\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512120 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-systemd-units\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512132 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-kubelet\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512184 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-var-lib-openvswitch\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512353 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-os-release\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512393 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512412 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-run-netns\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512484 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-run-systemd\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl"
Feb 16 16:59:56 crc kubenswrapper[4794]: E0216 16:59:56.512487 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512503 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b325454b-7201-4221-a07a-6093f1245d66-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-fk74m\" (UID: \"b325454b-7201-4221-a07a-6093f1245d66\") " pod="openshift-multus/multus-additional-cni-plugins-fk74m"
Feb 16 16:59:56 crc kubenswrapper[4794]: E0216 16:59:56.512509 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 16:59:56 crc kubenswrapper[4794]: E0216 16:59:56.512524 4794 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512424 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-host-run-netns\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512549 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-run-systemd\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512448 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d985e4f1-78bb-43f9-b86c-cd47831d602c-env-overrides\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl"
Feb 16 16:59:56 crc kubenswrapper[4794]: E0216 16:59:56.512570 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:57.512556505 +0000 UTC m=+23.460651232 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512517 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d985e4f1-78bb-43f9-b86c-cd47831d602c-ovnkube-script-lib\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512442 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-run-netns\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512621 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-cnibin\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512641 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d985e4f1-78bb-43f9-b86c-cd47831d602c-ovnkube-config\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512695 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-cnibin\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512720 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b325454b-7201-4221-a07a-6093f1245d66-os-release\") pod \"multus-additional-cni-plugins-fk74m\" (UID: \"b325454b-7201-4221-a07a-6093f1245d66\") " pod="openshift-multus/multus-additional-cni-plugins-fk74m"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512737 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-multus-socket-dir-parent\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512753 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-multus-daemon-config\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.513408 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-run-ovn\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.513469 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-run-ovn\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512894 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-cni-binary-copy\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512933 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b325454b-7201-4221-a07a-6093f1245d66-tuning-conf-dir\") pod \"multus-additional-cni-plugins-fk74m\" (UID: \"b325454b-7201-4221-a07a-6093f1245d66\") " pod="openshift-multus/multus-additional-cni-plugins-fk74m"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.513215 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b325454b-7201-4221-a07a-6093f1245d66-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-fk74m\" (UID: \"b325454b-7201-4221-a07a-6093f1245d66\") " pod="openshift-multus/multus-additional-cni-plugins-fk74m"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.513223 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b325454b-7201-4221-a07a-6093f1245d66-cni-binary-copy\") pod \"multus-additional-cni-plugins-fk74m\" (UID: \"b325454b-7201-4221-a07a-6093f1245d66\") " pod="openshift-multus/multus-additional-cni-plugins-fk74m"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.513335 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d985e4f1-78bb-43f9-b86c-cd47831d602c-ovnkube-config\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.513365 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-multus-daemon-config\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512802 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b325454b-7201-4221-a07a-6093f1245d66-os-release\") pod \"multus-additional-cni-plugins-fk74m\" (UID: \"b325454b-7201-4221-a07a-6093f1245d66\") " pod="openshift-multus/multus-additional-cni-plugins-fk74m"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.512832 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-multus-socket-dir-parent\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.513435 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-system-cni-dir\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.513590 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-host-run-multus-certs\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.513605 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-etc-kubernetes\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.513651 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-system-cni-dir\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.513676 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-host-run-multus-certs\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.513726 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-etc-kubernetes\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.516388 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2d17fb0b-381a-46a1-8bba-33daee594e18-proxy-tls\") pod \"machine-config-daemon-8q7xf\" (UID: \"2d17fb0b-381a-46a1-8bba-33daee594e18\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7xf"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.516677 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.516767 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d985e4f1-78bb-43f9-b86c-cd47831d602c-ovn-node-metrics-cert\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.527172 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.528978 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvswt\" (UniqueName: \"kubernetes.io/projected/7860ec44-a894-441d-b76a-2a88fa8441ab-kube-api-access-kvswt\") pod \"node-resolver-tqtvb\" (UID: \"7860ec44-a894-441d-b76a-2a88fa8441ab\") " pod="openshift-dns/node-resolver-tqtvb"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.529404 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztkjz\" (UniqueName: \"kubernetes.io/projected/2d17fb0b-381a-46a1-8bba-33daee594e18-kube-api-access-ztkjz\") pod \"machine-config-daemon-8q7xf\" (UID: \"2d17fb0b-381a-46a1-8bba-33daee594e18\") " pod="openshift-machine-config-operator/machine-config-daemon-8q7xf"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.529636 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpr45\" (UniqueName: \"kubernetes.io/projected/d985e4f1-78bb-43f9-b86c-cd47831d602c-kube-api-access-dpr45\") pod \"ovnkube-node-9krvl\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9krvl"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.530700 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pk7g\" (UniqueName: \"kubernetes.io/projected/f6f074ad-d6ce-4c47-aa3c-196e4ad30e64-kube-api-access-9pk7g\") pod \"multus-zwhdn\" (UID: \"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\") " pod="openshift-multus/multus-zwhdn"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.531481 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfvks\" (UniqueName: \"kubernetes.io/projected/b325454b-7201-4221-a07a-6093f1245d66-kube-api-access-kfvks\") pod \"multus-additional-cni-plugins-fk74m\" (UID: \"b325454b-7201-4221-a07a-6093f1245d66\") " pod="openshift-multus/multus-additional-cni-plugins-fk74m"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.538572 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.547860 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.564709 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.572230 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.581809 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.589814 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-fk74m" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.590630 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.607622 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.610952 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.626538 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.630189 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.636195 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.641223 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-zwhdn" Feb 16 16:59:56 crc kubenswrapper[4794]: W0216 16:59:56.649917 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd985e4f1_78bb_43f9_b86c_cd47831d602c.slice/crio-dfe3c1a24efa8b004629e7b97cbe7e033c0465c2275173213104298f4abc7c5b WatchSource:0}: Error finding container dfe3c1a24efa8b004629e7b97cbe7e033c0465c2275173213104298f4abc7c5b: Status 404 returned error can't find the container with id dfe3c1a24efa8b004629e7b97cbe7e033c0465c2275173213104298f4abc7c5b Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.650237 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.652248 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-tqtvb" Feb 16 16:59:56 crc kubenswrapper[4794]: W0216 16:59:56.675637 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6f074ad_d6ce_4c47_aa3c_196e4ad30e64.slice/crio-e9bb91f324c88b66e463c4d1f3baeb932ea1a3d25c7acd4eb2be70a1f93268f2 WatchSource:0}: Error finding container e9bb91f324c88b66e463c4d1f3baeb932ea1a3d25c7acd4eb2be70a1f93268f2: Status 404 returned error can't find the container with id e9bb91f324c88b66e463c4d1f3baeb932ea1a3d25c7acd4eb2be70a1f93268f2 Feb 16 16:59:56 crc kubenswrapper[4794]: W0216 16:59:56.694630 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7860ec44_a894_441d_b76a_2a88fa8441ab.slice/crio-007f5c4a9731f2deda4ca3586d85fbf1ca1cf82eb703c6608f515cef9af83a83 WatchSource:0}: Error finding container 007f5c4a9731f2deda4ca3586d85fbf1ca1cf82eb703c6608f515cef9af83a83: Status 404 returned error can't find the container 
with id 007f5c4a9731f2deda4ca3586d85fbf1ca1cf82eb703c6608f515cef9af83a83 Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.745629 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 02:16:27.900950779 +0000 UTC Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.796521 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.797325 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.798733 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.799360 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.800431 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.800993 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.801581 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.802763 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.803529 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.805193 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.805706 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.807369 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.807938 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.808592 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.809646 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.810141 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.811623 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.812160 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.813591 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.814383 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.815155 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.816478 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.817038 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" 
path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.818524 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.819016 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.820796 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.821772 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.822385 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.824978 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.827227 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.827912 4794 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" 
podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.829369 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.831844 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.832476 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.833970 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.836085 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.837091 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.838014 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.839044 4794 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.840651 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.841289 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.842536 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.843364 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.844502 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.844971 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.846026 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.846664 4794 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.847876 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.848558 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.849495 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.850039 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.850670 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.851739 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.852255 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.945105 4794 
generic.go:334] "Generic (PLEG): container finished" podID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerID="f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592" exitCode=0 Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.945235 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerDied","Data":"f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592"} Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.945279 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerStarted","Data":"dfe3c1a24efa8b004629e7b97cbe7e033c0465c2275173213104298f4abc7c5b"} Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.948185 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" event={"ID":"b325454b-7201-4221-a07a-6093f1245d66","Type":"ContainerStarted","Data":"df59792ab840c1853f4f48bff8d7076696a51158d0dac73dbafb351469510114"} Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.955478 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453"} Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.955524 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97"} Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.955535 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"608d740d0ca4d0c15ec047e5d9d585e4f0e65f6def30359c4fc6ec485967af69"} Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.961547 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tqtvb" event={"ID":"7860ec44-a894-441d-b76a-2a88fa8441ab","Type":"ContainerStarted","Data":"007f5c4a9731f2deda4ca3586d85fbf1ca1cf82eb703c6608f515cef9af83a83"} Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.963532 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52"} Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.963565 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"731f22461df268562eff28db5a220df857853f9b6283295089e144d471abcd69"} Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.965899 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.967419 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:56Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.968264 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38"} Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.968625 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.970990 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zwhdn" event={"ID":"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64","Type":"ContainerStarted","Data":"9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757"} Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.971022 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-zwhdn" event={"ID":"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64","Type":"ContainerStarted","Data":"e9bb91f324c88b66e463c4d1f3baeb932ea1a3d25c7acd4eb2be70a1f93268f2"} Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.973084 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"97257c683f36fec9a6d4d0e7ee85af2ea7fa6143869803e437f99862f5e1d18a"} Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.973123 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"5b8e5823303bc154c014ca298f300cecb24b4365fa1dc74e0aadd87dd4a8d103"} Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.974196 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c02b1e3f10dbba24e2bbdc8d8639fcc4b22838025a4e648bcf66c68960311fc2"} Feb 16 16:59:56 crc kubenswrapper[4794]: E0216 16:59:56.982961 4794 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.984623 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:56Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:56 crc kubenswrapper[4794]: I0216 16:59:56.997699 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:56Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.010280 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.028282 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.047767 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.072436 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.091448 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.108542 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.122139 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.139817 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.152378 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.174943 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.191338 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.209645 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.223140 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.237099 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.253649 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.275187 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.289387 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.301485 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.316085 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.332087 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.353487 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.367760 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.381984 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.427426 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.427483 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.427527 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 16:59:57 crc kubenswrapper[4794]: E0216 16:59:57.427623 4794 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 16:59:57 crc kubenswrapper[4794]: E0216 16:59:57.427678 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:59.427662217 +0000 UTC m=+25.375756864 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 16:59:57 crc kubenswrapper[4794]: E0216 16:59:57.427784 4794 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 16:59:57 crc kubenswrapper[4794]: E0216 16:59:57.427806 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 16:59:57 crc kubenswrapper[4794]: E0216 16:59:57.427834 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 16:59:57 crc kubenswrapper[4794]: E0216 16:59:57.427851 4794 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:57 crc kubenswrapper[4794]: E0216 16:59:57.427877 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:59.427856232 +0000 UTC m=+25.375950889 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 16:59:57 crc kubenswrapper[4794]: E0216 16:59:57.427925 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:59.427890703 +0000 UTC m=+25.375985440 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.528117 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.528233 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 
16:59:57 crc kubenswrapper[4794]: E0216 16:59:57.528332 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 16:59:59.528292889 +0000 UTC m=+25.476387536 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 16:59:57 crc kubenswrapper[4794]: E0216 16:59:57.528422 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 16:59:57 crc kubenswrapper[4794]: E0216 16:59:57.528442 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 16:59:57 crc kubenswrapper[4794]: E0216 16:59:57.528455 4794 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:57 crc kubenswrapper[4794]: E0216 16:59:57.528510 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-16 16:59:59.528495194 +0000 UTC m=+25.476589851 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.746585 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 13:30:44.632378409 +0000 UTC Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.791070 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.791138 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.791155 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 16:59:57 crc kubenswrapper[4794]: E0216 16:59:57.791185 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 16:59:57 crc kubenswrapper[4794]: E0216 16:59:57.791246 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 16:59:57 crc kubenswrapper[4794]: E0216 16:59:57.791365 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.982950 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166"} Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.984596 4794 generic.go:334] "Generic (PLEG): container finished" podID="b325454b-7201-4221-a07a-6093f1245d66" containerID="62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974" exitCode=0 Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.984670 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" event={"ID":"b325454b-7201-4221-a07a-6093f1245d66","Type":"ContainerDied","Data":"62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974"} Feb 
16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.987786 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tqtvb" event={"ID":"7860ec44-a894-441d-b76a-2a88fa8441ab","Type":"ContainerStarted","Data":"20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192"} Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.993077 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerStarted","Data":"bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0"} Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.993121 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerStarted","Data":"fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1"} Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.993140 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerStarted","Data":"ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9"} Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.993156 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerStarted","Data":"69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184"} Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.993170 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerStarted","Data":"c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb"} Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.993185 4794 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerStarted","Data":"0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0"} Feb 16 16:59:57 crc kubenswrapper[4794]: I0216 16:59:57.997294 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-contro
ller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:57Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.014698 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.034758 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.048221 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.058679 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.070315 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.085936 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.094673 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.106055 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.117349 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.130088 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.146783 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.157422 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.168570 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.178512 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.193810 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.209886 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.222087 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.235001 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.246650 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.260488 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.279524 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.295108 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.307510 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.322286 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.336237 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.521413 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-w6ttl"] Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.521737 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-w6ttl" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.524072 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.524539 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.524658 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.525670 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.534133 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.544411 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.560037 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.571905 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.582814 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.594808 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.606286 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.619001 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.633025 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.640851 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rdjg\" (UniqueName: \"kubernetes.io/projected/bf36d1cc-c61d-4339-91a7-579ff74019aa-kube-api-access-6rdjg\") pod \"node-ca-w6ttl\" (UID: \"bf36d1cc-c61d-4339-91a7-579ff74019aa\") " pod="openshift-image-registry/node-ca-w6ttl" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.640914 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf36d1cc-c61d-4339-91a7-579ff74019aa-host\") pod \"node-ca-w6ttl\" (UID: \"bf36d1cc-c61d-4339-91a7-579ff74019aa\") " pod="openshift-image-registry/node-ca-w6ttl" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.640944 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bf36d1cc-c61d-4339-91a7-579ff74019aa-serviceca\") pod \"node-ca-w6ttl\" (UID: \"bf36d1cc-c61d-4339-91a7-579ff74019aa\") " pod="openshift-image-registry/node-ca-w6ttl" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.659434 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.674489 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.688616 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.721453 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.742139 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rdjg\" (UniqueName: \"kubernetes.io/projected/bf36d1cc-c61d-4339-91a7-579ff74019aa-kube-api-access-6rdjg\") pod \"node-ca-w6ttl\" (UID: 
\"bf36d1cc-c61d-4339-91a7-579ff74019aa\") " pod="openshift-image-registry/node-ca-w6ttl" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.742198 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf36d1cc-c61d-4339-91a7-579ff74019aa-host\") pod \"node-ca-w6ttl\" (UID: \"bf36d1cc-c61d-4339-91a7-579ff74019aa\") " pod="openshift-image-registry/node-ca-w6ttl" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.742234 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bf36d1cc-c61d-4339-91a7-579ff74019aa-serviceca\") pod \"node-ca-w6ttl\" (UID: \"bf36d1cc-c61d-4339-91a7-579ff74019aa\") " pod="openshift-image-registry/node-ca-w6ttl" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.742374 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf36d1cc-c61d-4339-91a7-579ff74019aa-host\") pod \"node-ca-w6ttl\" (UID: \"bf36d1cc-c61d-4339-91a7-579ff74019aa\") " pod="openshift-image-registry/node-ca-w6ttl" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.743052 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bf36d1cc-c61d-4339-91a7-579ff74019aa-serviceca\") pod \"node-ca-w6ttl\" (UID: \"bf36d1cc-c61d-4339-91a7-579ff74019aa\") " pod="openshift-image-registry/node-ca-w6ttl" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.747189 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 04:09:58.609864954 +0000 UTC Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.758706 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:58Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.788497 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rdjg\" (UniqueName: \"kubernetes.io/projected/bf36d1cc-c61d-4339-91a7-579ff74019aa-kube-api-access-6rdjg\") pod \"node-ca-w6ttl\" (UID: \"bf36d1cc-c61d-4339-91a7-579ff74019aa\") " pod="openshift-image-registry/node-ca-w6ttl" Feb 16 16:59:58 crc kubenswrapper[4794]: I0216 16:59:58.840432 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-w6ttl" Feb 16 16:59:58 crc kubenswrapper[4794]: W0216 16:59:58.854278 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf36d1cc_c61d_4339_91a7_579ff74019aa.slice/crio-8ceba4404833942f21f9f1e6b02b13accbc8c93f5864841a5145696d2c99964f WatchSource:0}: Error finding container 8ceba4404833942f21f9f1e6b02b13accbc8c93f5864841a5145696d2c99964f: Status 404 returned error can't find the container with id 8ceba4404833942f21f9f1e6b02b13accbc8c93f5864841a5145696d2c99964f Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.000878 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" event={"ID":"b325454b-7201-4221-a07a-6093f1245d66","Type":"ContainerStarted","Data":"b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332"} Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.002125 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433"} Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.003405 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-w6ttl" event={"ID":"bf36d1cc-c61d-4339-91a7-579ff74019aa","Type":"ContainerStarted","Data":"8ceba4404833942f21f9f1e6b02b13accbc8c93f5864841a5145696d2c99964f"} Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.013781 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.032965 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.046204 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.059547 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.073802 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.087706 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.101008 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.115191 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.139982 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.181167 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.232329 4794 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.277264 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.299713 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.338387 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.381723 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.418426 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.449838 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.449891 4794 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.449939 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 16:59:59 crc kubenswrapper[4794]: E0216 16:59:59.450009 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 16:59:59 crc kubenswrapper[4794]: E0216 16:59:59.450045 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 16:59:59 crc kubenswrapper[4794]: E0216 16:59:59.450049 4794 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 16:59:59 crc kubenswrapper[4794]: E0216 16:59:59.450059 4794 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:59 crc kubenswrapper[4794]: E0216 16:59:59.450103 4794 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:03.450087566 +0000 UTC m=+29.398182213 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 16:59:59 crc kubenswrapper[4794]: E0216 16:59:59.450122 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:03.450113907 +0000 UTC m=+29.398208554 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:59 crc kubenswrapper[4794]: E0216 16:59:59.450135 4794 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 16:59:59 crc kubenswrapper[4794]: E0216 16:59:59.450176 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:00:03.450158418 +0000 UTC m=+29.398253145 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.459749 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name
\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.503270 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.540011 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.550587 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.550723 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 16:59:59 crc kubenswrapper[4794]: E0216 16:59:59.550806 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:00:03.55077768 +0000 UTC m=+29.498872337 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 16:59:59 crc kubenswrapper[4794]: E0216 16:59:59.550826 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 16:59:59 crc kubenswrapper[4794]: E0216 16:59:59.550841 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 16:59:59 crc kubenswrapper[4794]: E0216 16:59:59.550852 4794 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:59 crc kubenswrapper[4794]: E0216 16:59:59.550894 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:03.550882973 +0000 UTC m=+29.498977620 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.579520 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-scri
pt\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.620779 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.659108 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.699543 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.738378 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.747460 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 17:48:57.062340305 +0000 UTC Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.780522 4794 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.791178 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.791197 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.791198 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 16:59:59 crc kubenswrapper[4794]: E0216 16:59:59.791276 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 16:59:59 crc kubenswrapper[4794]: E0216 16:59:59.791446 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 16:59:59 crc kubenswrapper[4794]: E0216 16:59:59.791599 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.819719 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.864978 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 16:59:59 crc kubenswrapper[4794]: I0216 16:59:59.901256 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T16:59:59Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.012987 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerStarted","Data":"ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0"} Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.016833 4794 generic.go:334] "Generic (PLEG): container finished" podID="b325454b-7201-4221-a07a-6093f1245d66" containerID="b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332" exitCode=0 Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.016959 4794 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" event={"ID":"b325454b-7201-4221-a07a-6093f1245d66","Type":"ContainerDied","Data":"b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332"} Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.019617 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-w6ttl" event={"ID":"bf36d1cc-c61d-4339-91a7-579ff74019aa","Type":"ContainerStarted","Data":"2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924"} Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.042547 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.058729 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.078344 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.100285 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.133846 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.146296 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.178480 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.221056 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.259970 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.302151 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.340365 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.377668 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.418286 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.460099 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.502394 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.540103 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.577974 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.625868 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.659634 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.701830 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.745218 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.748071 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:43:38.128687023 +0000 UTC Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.779029 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.822051 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.860893 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.909432 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.938690 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:00 crc kubenswrapper[4794]: I0216 17:00:00.979029 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:00Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.024122 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:01Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.026463 4794 generic.go:334] "Generic (PLEG): container finished" podID="b325454b-7201-4221-a07a-6093f1245d66" containerID="b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375" exitCode=0 Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.026502 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" event={"ID":"b325454b-7201-4221-a07a-6093f1245d66","Type":"ContainerDied","Data":"b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375"} Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.065003 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:01Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.101993 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:01Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.139750 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:01Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.181000 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:01Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.224691 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:01Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.256992 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:01Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.258235 4794 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.260953 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.261020 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.261033 4794 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.261228 4794 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.317822 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:01Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.331161 4794 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 16 
17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.332082 4794 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.333206 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.333240 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.333252 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.333268 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.333278 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:01Z","lastTransitionTime":"2026-02-16T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:01 crc kubenswrapper[4794]: E0216 17:00:01.346141 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:01Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.350043 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.350075 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.350086 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.350104 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.350117 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:01Z","lastTransitionTime":"2026-02-16T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:01 crc kubenswrapper[4794]: E0216 17:00:01.361839 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:01Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.365776 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.365818 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.365831 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.365848 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.365861 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:01Z","lastTransitionTime":"2026-02-16T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.375861 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:01Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:01 crc kubenswrapper[4794]: E0216 17:00:01.379562 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:01Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.382543 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.382573 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.382582 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.382595 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.382606 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:01Z","lastTransitionTime":"2026-02-16T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:01 crc kubenswrapper[4794]: E0216 17:00:01.395263 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:01Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.398066 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.398099 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.398108 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.398120 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.398129 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:01Z","lastTransitionTime":"2026-02-16T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:01 crc kubenswrapper[4794]: E0216 17:00:01.412598 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:01Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:01 crc kubenswrapper[4794]: E0216 17:00:01.412929 4794 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.414449 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.414480 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.414490 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.414508 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.414519 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:01Z","lastTransitionTime":"2026-02-16T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.421351 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c
13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:01Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.462723 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:01Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.504844 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:01Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.517566 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.517871 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.518039 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.518177 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.518350 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:01Z","lastTransitionTime":"2026-02-16T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.543023 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:01Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.581970 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:01Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.621704 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.622006 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.622164 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.622381 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.622567 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:01Z","lastTransitionTime":"2026-02-16T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.629076 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:
59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:01Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.724997 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.725455 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.725741 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.726574 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.726713 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:01Z","lastTransitionTime":"2026-02-16T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.748261 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 05:13:17.371602918 +0000 UTC Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.790707 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.790758 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:01 crc kubenswrapper[4794]: E0216 17:00:01.790900 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.791014 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:01 crc kubenswrapper[4794]: E0216 17:00:01.791189 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:01 crc kubenswrapper[4794]: E0216 17:00:01.791409 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.829273 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.829331 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.829346 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.829366 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.829381 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:01Z","lastTransitionTime":"2026-02-16T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.932002 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.932350 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.932361 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.932381 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:01 crc kubenswrapper[4794]: I0216 17:00:01.932393 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:01Z","lastTransitionTime":"2026-02-16T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.034535 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.034609 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.034633 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.034666 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.034689 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:02Z","lastTransitionTime":"2026-02-16T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.039693 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerStarted","Data":"3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de"} Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.040701 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.040742 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.046444 4794 generic.go:334] "Generic (PLEG): container finished" podID="b325454b-7201-4221-a07a-6093f1245d66" containerID="6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971" exitCode=0 Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.046492 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" event={"ID":"b325454b-7201-4221-a07a-6093f1245d66","Type":"ContainerDied","Data":"6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971"} Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.060136 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.075005 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.085081 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.098854 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.116993 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.118745 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.118886 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.128608 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.141699 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.141729 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.141738 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.141770 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.141782 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:02Z","lastTransitionTime":"2026-02-16T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.143600 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.158200 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.174428 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.189145 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.215722 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.229270 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.239884 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.243579 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.243648 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.243666 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.243694 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.243712 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:02Z","lastTransitionTime":"2026-02-16T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.254847 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:
59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.269750 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.287881 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.299952 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.342111 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.345887 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.345915 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.345927 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.345942 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.345953 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:02Z","lastTransitionTime":"2026-02-16T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.381167 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.420107 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.448037 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:02 crc 
kubenswrapper[4794]: I0216 17:00:02.448061 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.448068 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.448081 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.448089 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:02Z","lastTransitionTime":"2026-02-16T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.458775 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.499064 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.537909 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.550593 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.550669 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.550730 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.550750 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.550762 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:02Z","lastTransitionTime":"2026-02-16T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.580432 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.624030 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.653259 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.653289 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.653314 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.653338 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.653351 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:02Z","lastTransitionTime":"2026-02-16T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.659336 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.699149 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.737473 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:02Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.748680 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 14:09:29.782791875 +0000 UTC Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.755692 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.755744 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.755764 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.755789 4794 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.755812 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:02Z","lastTransitionTime":"2026-02-16T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.858761 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.858807 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.858819 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.858836 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.858848 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:02Z","lastTransitionTime":"2026-02-16T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.961950 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.962027 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.962052 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.962082 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:02 crc kubenswrapper[4794]: I0216 17:00:02.962100 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:02Z","lastTransitionTime":"2026-02-16T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.053402 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" event={"ID":"b325454b-7201-4221-a07a-6093f1245d66","Type":"ContainerStarted","Data":"4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595"} Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.053493 4794 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.065276 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.065356 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.065375 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.065401 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.065424 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:03Z","lastTransitionTime":"2026-02-16T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.069777 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.083052 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.099622 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.116830 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.132785 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.147440 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.163674 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.167868 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.167912 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.167924 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.167944 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.167957 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:03Z","lastTransitionTime":"2026-02-16T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.186945 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.206478 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.228186 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.246270 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.257809 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.266761 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.270275 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.270331 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.270343 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.270359 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.270370 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:03Z","lastTransitionTime":"2026-02-16T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.298258 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c
13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.373283 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.373344 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.373357 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.373372 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.373381 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:03Z","lastTransitionTime":"2026-02-16T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.476462 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.476515 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.476534 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.476556 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.476572 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:03Z","lastTransitionTime":"2026-02-16T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.490178 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.490232 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.490370 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:03 crc kubenswrapper[4794]: E0216 17:00:03.490607 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:00:03 crc kubenswrapper[4794]: E0216 17:00:03.490651 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:00:03 crc kubenswrapper[4794]: E0216 17:00:03.490666 4794 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:03 crc kubenswrapper[4794]: E0216 17:00:03.490672 4794 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:00:03 crc kubenswrapper[4794]: E0216 17:00:03.490614 4794 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:00:03 crc kubenswrapper[4794]: E0216 17:00:03.490728 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:11.490708222 +0000 UTC m=+37.438802929 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:03 crc kubenswrapper[4794]: E0216 17:00:03.490752 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:11.490740803 +0000 UTC m=+37.438835460 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:00:03 crc kubenswrapper[4794]: E0216 17:00:03.490768 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:11.490759553 +0000 UTC m=+37.438854310 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.579197 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.579240 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.579249 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.579263 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.579273 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:03Z","lastTransitionTime":"2026-02-16T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.591025 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:00:03 crc kubenswrapper[4794]: E0216 17:00:03.591184 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:00:11.591163719 +0000 UTC m=+37.539258366 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.591277 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:03 crc kubenswrapper[4794]: E0216 17:00:03.591390 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:00:03 crc kubenswrapper[4794]: E0216 17:00:03.591413 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:00:03 crc kubenswrapper[4794]: E0216 17:00:03.591612 4794 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:03 crc kubenswrapper[4794]: E0216 17:00:03.591653 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:11.591643532 +0000 UTC m=+37.539738179 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.681870 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.681934 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.681951 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.681978 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.681996 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:03Z","lastTransitionTime":"2026-02-16T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.749677 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 16:55:03.46057737 +0000 UTC Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.785229 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.785281 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.785317 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.785342 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.785356 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:03Z","lastTransitionTime":"2026-02-16T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.790759 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.790788 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.790794 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:03 crc kubenswrapper[4794]: E0216 17:00:03.790923 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:03 crc kubenswrapper[4794]: E0216 17:00:03.791065 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:03 crc kubenswrapper[4794]: E0216 17:00:03.791196 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.889191 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.889640 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.889850 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.890068 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.890350 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:03Z","lastTransitionTime":"2026-02-16T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.993449 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.993733 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.993822 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.993936 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:03 crc kubenswrapper[4794]: I0216 17:00:03.994043 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:03Z","lastTransitionTime":"2026-02-16T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.063233 4794 generic.go:334] "Generic (PLEG): container finished" podID="b325454b-7201-4221-a07a-6093f1245d66" containerID="4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595" exitCode=0 Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.063441 4794 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.063578 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" event={"ID":"b325454b-7201-4221-a07a-6093f1245d66","Type":"ContainerDied","Data":"4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595"} Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.090582 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e
01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.097711 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.097758 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.097772 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.097792 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.097806 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:04Z","lastTransitionTime":"2026-02-16T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.104893 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.118254 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.133347 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.147289 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.156688 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.170394 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.182895 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.193528 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.199381 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.199411 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.199419 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.199432 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.199441 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:04Z","lastTransitionTime":"2026-02-16T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.208282 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.225513 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.238247 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.247910 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.260063 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.302538 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.302588 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.302599 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.302613 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.302622 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:04Z","lastTransitionTime":"2026-02-16T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.405058 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.405128 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.405147 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.405171 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.405188 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:04Z","lastTransitionTime":"2026-02-16T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.507878 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.507914 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.507923 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.507937 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.507945 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:04Z","lastTransitionTime":"2026-02-16T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.584876 4794 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.609728 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.609975 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.610068 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.610144 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.610207 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:04Z","lastTransitionTime":"2026-02-16T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.711962 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.711996 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.712004 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.712018 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.712028 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:04Z","lastTransitionTime":"2026-02-16T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.750317 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 08:30:18.411569019 +0000 UTC Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.801132 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.813423 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.814496 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.814575 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.814598 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.814626 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.814648 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:04Z","lastTransitionTime":"2026-02-16T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.826345 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.839009 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.853053 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.873174 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.889062 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.900601 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/service
ca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.915121 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.916116 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.916170 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.916182 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.916198 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.916234 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:04Z","lastTransitionTime":"2026-02-16T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.928868 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.944467 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.959294 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.969473 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:04 crc kubenswrapper[4794]: I0216 17:00:04.982108 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:04Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.017800 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:05 crc 
kubenswrapper[4794]: I0216 17:00:05.017829 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.017840 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.017855 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.017865 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:05Z","lastTransitionTime":"2026-02-16T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.067602 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9krvl_d985e4f1-78bb-43f9-b86c-cd47831d602c/ovnkube-controller/0.log" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.069991 4794 generic.go:334] "Generic (PLEG): container finished" podID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerID="3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de" exitCode=1 Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.070049 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerDied","Data":"3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de"} Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.070654 4794 scope.go:117] "RemoveContainer" containerID="3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de" Feb 16 17:00:05 crc 
kubenswrapper[4794]: I0216 17:00:05.073472 4794 generic.go:334] "Generic (PLEG): container finished" podID="b325454b-7201-4221-a07a-6093f1245d66" containerID="e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c" exitCode=0 Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.073518 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" event={"ID":"b325454b-7201-4221-a07a-6093f1245d66","Type":"ContainerDied","Data":"e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c"} Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.087013 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.098784 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59
:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.110299 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.120464 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.120880 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.120958 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.121384 4794 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.121434 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.121449 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:05Z","lastTransitionTime":"2026-02-16T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.131131 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145
a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.143537 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd890
9e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.154151 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.165126 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.182455 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.204121 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17
:00:05Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:05.036601 6055 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:05.036779 6055 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 17:00:05.036796 6055 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 17:00:05.036808 6055 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 17:00:05.036839 6055 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 17:00:05.036860 6055 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 17:00:05.036863 6055 factory.go:656] Stopping watch factory\\\\nI0216 17:00:05.036874 6055 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 17:00:05.036882 6055 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 17:00:05.036881 6055 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 17:00:05.037033 6055 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 17:00:05.037088 6055 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env
-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.214642 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.227751 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.227686 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.227798 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.227839 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.227855 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.227866 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:05Z","lastTransitionTime":"2026-02-16T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.238066 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.251370 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.261377 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.270246 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.278800 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.289559 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.300833 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.312046 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.323974 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.330543 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.330583 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.330593 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.330621 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.330633 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:05Z","lastTransitionTime":"2026-02-16T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.337040 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.351159 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.388977 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:05.036601 6055 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:05.036779 6055 handler.go:190] 
Sending *v1.Node event handler 2 for removal\\\\nI0216 17:00:05.036796 6055 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 17:00:05.036808 6055 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 17:00:05.036839 6055 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 17:00:05.036860 6055 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 17:00:05.036863 6055 factory.go:656] Stopping watch factory\\\\nI0216 17:00:05.036874 6055 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 17:00:05.036882 6055 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 17:00:05.036881 6055 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 17:00:05.037033 6055 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 17:00:05.037088 6055 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env
-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.416678 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.435529 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.435647 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.435762 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.435796 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.435816 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:05Z","lastTransitionTime":"2026-02-16T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.460988 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.500715 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.539439 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.539483 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.539496 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.539512 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.539524 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:05Z","lastTransitionTime":"2026-02-16T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.546970 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c
13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:05Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.642107 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.642171 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.642189 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.642215 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.642233 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:05Z","lastTransitionTime":"2026-02-16T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.744552 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.744599 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.744611 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.744623 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.744632 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:05Z","lastTransitionTime":"2026-02-16T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.750969 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 23:47:10.549402479 +0000 UTC Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.790525 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.790558 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:05 crc kubenswrapper[4794]: E0216 17:00:05.790642 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.790536 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:05 crc kubenswrapper[4794]: E0216 17:00:05.790733 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:05 crc kubenswrapper[4794]: E0216 17:00:05.790792 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.846764 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.846812 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.846823 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.846843 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.846856 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:05Z","lastTransitionTime":"2026-02-16T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.949414 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.949487 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.949515 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.949546 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:05 crc kubenswrapper[4794]: I0216 17:00:05.949571 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:05Z","lastTransitionTime":"2026-02-16T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.052333 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.052375 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.052383 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.052398 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.052409 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:06Z","lastTransitionTime":"2026-02-16T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.081448 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9krvl_d985e4f1-78bb-43f9-b86c-cd47831d602c/ovnkube-controller/0.log" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.085371 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerStarted","Data":"b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf"} Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.085595 4794 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.089730 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" event={"ID":"b325454b-7201-4221-a07a-6093f1245d66","Type":"ContainerStarted","Data":"a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79"} Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.099811 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.109700 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.121858 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.136874 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.152293 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.155261 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.155319 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.155327 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.155342 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.155351 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:06Z","lastTransitionTime":"2026-02-16T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.165874 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.178961 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.194111 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.210116 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.231036 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.244749 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.257295 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.257339 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.257348 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:06 crc 
kubenswrapper[4794]: I0216 17:00:06.257361 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.257371 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:06Z","lastTransitionTime":"2026-02-16T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.258356 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 
17:00:06.271618 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.288661 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:05.036601 6055 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:05.036779 6055 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 17:00:05.036796 6055 handler.go:190] Sending *v1.Node event handler 7 
for removal\\\\nI0216 17:00:05.036808 6055 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 17:00:05.036839 6055 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 17:00:05.036860 6055 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 17:00:05.036863 6055 factory.go:656] Stopping watch factory\\\\nI0216 17:00:05.036874 6055 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 17:00:05.036882 6055 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 17:00:05.036881 6055 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 17:00:05.037033 6055 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 17:00:05.037088 6055 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath
\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.298006 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/service
ca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.308634 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.321930 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.333875 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.345160 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.357162 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.359970 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.360031 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.360040 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.360059 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.360069 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:06Z","lastTransitionTime":"2026-02-16T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.378782 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:
59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.418982 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.458270 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.461951 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.462099 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.462169 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:06 crc 
kubenswrapper[4794]: I0216 17:00:06.462236 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.462293 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:06Z","lastTransitionTime":"2026-02-16T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.505343 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 
17:00:06.548008 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9e
db1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"
cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.564503 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.564543 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.564553 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.564569 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.564580 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:06Z","lastTransitionTime":"2026-02-16T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.583663 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:05.036601 6055 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:05.036779 6055 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 17:00:05.036796 6055 handler.go:190] Sending *v1.Node event handler 7 
for removal\\\\nI0216 17:00:05.036808 6055 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 17:00:05.036839 6055 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 17:00:05.036860 6055 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 17:00:05.036863 6055 factory.go:656] Stopping watch factory\\\\nI0216 17:00:05.036874 6055 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 17:00:05.036882 6055 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 17:00:05.036881 6055 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 17:00:05.037033 6055 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 17:00:05.037088 6055 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath
\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.619279 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ho
sts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.664383 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.667054 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.667091 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.667104 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.667121 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.667141 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:06Z","lastTransitionTime":"2026-02-16T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.751450 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 01:16:45.867825883 +0000 UTC Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.770116 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.770167 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.770183 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.770205 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.770220 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:06Z","lastTransitionTime":"2026-02-16T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.872502 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.872550 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.872560 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.872580 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.872596 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:06Z","lastTransitionTime":"2026-02-16T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.975229 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.975273 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.975282 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.975321 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:06 crc kubenswrapper[4794]: I0216 17:00:06.975331 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:06Z","lastTransitionTime":"2026-02-16T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.077561 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.077604 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.077615 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.077632 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.077645 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:07Z","lastTransitionTime":"2026-02-16T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.094281 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9krvl_d985e4f1-78bb-43f9-b86c-cd47831d602c/ovnkube-controller/1.log" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.094837 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9krvl_d985e4f1-78bb-43f9-b86c-cd47831d602c/ovnkube-controller/0.log" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.098316 4794 generic.go:334] "Generic (PLEG): container finished" podID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerID="b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf" exitCode=1 Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.098431 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerDied","Data":"b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf"} Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.098574 4794 scope.go:117] "RemoveContainer" containerID="3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.099272 4794 scope.go:117] "RemoveContainer" containerID="b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf" Feb 16 17:00:07 crc kubenswrapper[4794]: E0216 17:00:07.099491 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-9krvl_openshift-ovn-kubernetes(d985e4f1-78bb-43f9-b86c-cd47831d602c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.119395 4794 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:07Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.137269 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:07Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.151811 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:07Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.165546 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:07Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.177250 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:07Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.179676 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:07 crc 
kubenswrapper[4794]: I0216 17:00:07.179705 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.179714 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.179730 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.179740 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:07Z","lastTransitionTime":"2026-02-16T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.192980 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:07Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.206453 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:07Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.218804 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:07Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.234630 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312f
cc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:07Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.257434 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:05.036601 6055 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:05.036779 6055 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 17:00:05.036796 6055 handler.go:190] Sending *v1.Node event handler 7 
for removal\\\\nI0216 17:00:05.036808 6055 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 17:00:05.036839 6055 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 17:00:05.036860 6055 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 17:00:05.036863 6055 factory.go:656] Stopping watch factory\\\\nI0216 17:00:05.036874 6055 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 17:00:05.036882 6055 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 17:00:05.036881 6055 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 17:00:05.037033 6055 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 17:00:05.037088 6055 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"message\\\":\\\"]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0216 17:00:06.059578 6228 services_controller.go:452] Built service openshift-machine-api/cluster-autoscaler-operator per-node LB for network=default: []services.LB{}\\\\nI0216 17:00:06.059590 6228 services_controller.go:453] Built service openshift-machine-api/cluster-autoscaler-operator template LB for network=default: []services.LB{}\\\\nI0216 17:00:06.059565 6228 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"na
me\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:07Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.269032 4794 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:07Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.281238 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:07Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.282048 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.282100 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.282118 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.282138 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.282168 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:07Z","lastTransitionTime":"2026-02-16T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.295947 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:07Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.309799 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:07Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.384752 4794 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.384800 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.384817 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.384839 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.384856 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:07Z","lastTransitionTime":"2026-02-16T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.488097 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.488147 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.488158 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.488175 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.488187 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:07Z","lastTransitionTime":"2026-02-16T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.590221 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.590290 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.590335 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.590363 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.590380 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:07Z","lastTransitionTime":"2026-02-16T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.692805 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.692867 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.692885 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.692912 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.692930 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:07Z","lastTransitionTime":"2026-02-16T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.751673 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 02:16:26.783720191 +0000 UTC Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.791059 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.791130 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.791083 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:07 crc kubenswrapper[4794]: E0216 17:00:07.791238 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:07 crc kubenswrapper[4794]: E0216 17:00:07.791461 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:07 crc kubenswrapper[4794]: E0216 17:00:07.791602 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.795562 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.795620 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.795639 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.795662 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.795679 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:07Z","lastTransitionTime":"2026-02-16T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.898742 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.898786 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.898806 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.898834 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.898857 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:07Z","lastTransitionTime":"2026-02-16T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:07 crc kubenswrapper[4794]: I0216 17:00:07.997609 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.001688 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.001723 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.001733 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.001748 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.001760 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:08Z","lastTransitionTime":"2026-02-16T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.041394 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.064632 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.081460 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.094917 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.103198 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.103224 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.103241 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.103261 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.103275 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:08Z","lastTransitionTime":"2026-02-16T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.103810 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9krvl_d985e4f1-78bb-43f9-b86c-cd47831d602c/ovnkube-controller/1.log" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.109173 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba
93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.124122 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing 
use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.135388 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.145721 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.159845 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312f
cc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.181504 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:05.036601 6055 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:05.036779 6055 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 17:00:05.036796 6055 handler.go:190] Sending *v1.Node event handler 7 
for removal\\\\nI0216 17:00:05.036808 6055 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 17:00:05.036839 6055 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 17:00:05.036860 6055 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 17:00:05.036863 6055 factory.go:656] Stopping watch factory\\\\nI0216 17:00:05.036874 6055 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 17:00:05.036882 6055 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 17:00:05.036881 6055 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 17:00:05.037033 6055 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 17:00:05.037088 6055 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"message\\\":\\\"]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0216 17:00:06.059578 6228 services_controller.go:452] Built service openshift-machine-api/cluster-autoscaler-operator per-node LB for network=default: []services.LB{}\\\\nI0216 17:00:06.059590 6228 services_controller.go:453] Built service openshift-machine-api/cluster-autoscaler-operator template LB for network=default: []services.LB{}\\\\nI0216 17:00:06.059565 6228 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"na
me\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.191991 4794 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.205496 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.205545 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.205559 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.205579 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.205594 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:08Z","lastTransitionTime":"2026-02-16T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.207823 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.220428 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.233629 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.308212 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.308269 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.308292 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.308369 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.308393 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:08Z","lastTransitionTime":"2026-02-16T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.410799 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.410849 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.410863 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.410883 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.410898 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:08Z","lastTransitionTime":"2026-02-16T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.512689 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.512720 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.512737 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.512765 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.512777 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:08Z","lastTransitionTime":"2026-02-16T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.615538 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.615567 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.615577 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.615592 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.615601 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:08Z","lastTransitionTime":"2026-02-16T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.718318 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.718541 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.718639 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.718753 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.718837 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:08Z","lastTransitionTime":"2026-02-16T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.751827 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 17:37:52.015334414 +0000 UTC
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.821174 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.821210 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.821234 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.821247 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.821255 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:08Z","lastTransitionTime":"2026-02-16T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.926817 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.926866 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.926878 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.926900 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.926912 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:08Z","lastTransitionTime":"2026-02-16T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.944289 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs"]
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.944766 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.946740 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.947056 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.960441 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:08Z is after 2025-08-24T17:21:41Z"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.971995 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:08Z is after 2025-08-24T17:21:41Z"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.983058 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:08Z is after 2025-08-24T17:21:41Z"
Feb 16 17:00:08 crc kubenswrapper[4794]: I0216 17:00:08.994846 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:08Z is after 2025-08-24T17:21:41Z"
Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.005197 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:09Z is after 2025-08-24T17:21:41Z"
Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.014420 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:09Z is after 2025-08-24T17:21:41Z"
Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.026680 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:09Z is after 2025-08-24T17:21:41Z"
Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.028964 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.028997 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.029006 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.029021 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.029032 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:09Z","lastTransitionTime":"2026-02-16T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.040775 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:09Z is after 2025-08-24T17:21:41Z"
Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.045082 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/237c381f-d225-4a4b-8bc9-6c03ee09015f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cmzfs\" (UID: \"237c381f-d225-4a4b-8bc9-6c03ee09015f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs"
Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.045204 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/237c381f-d225-4a4b-8bc9-6c03ee09015f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cmzfs\" (UID: \"237c381f-d225-4a4b-8bc9-6c03ee09015f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs"
Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.045240 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/237c381f-d225-4a4b-8bc9-6c03ee09015f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cmzfs\" (UID: \"237c381f-d225-4a4b-8bc9-6c03ee09015f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs"
Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.045273 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fnpn\" (UniqueName: \"kubernetes.io/projected/237c381f-d225-4a4b-8bc9-6c03ee09015f-kube-api-access-4fnpn\") pod \"ovnkube-control-plane-749d76644c-cmzfs\" (UID: \"237c381f-d225-4a4b-8bc9-6c03ee09015f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs"
Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.051602 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.068533 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312f
cc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.091422 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:05.036601 6055 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:05.036779 6055 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 17:00:05.036796 6055 handler.go:190] Sending *v1.Node event handler 7 
for removal\\\\nI0216 17:00:05.036808 6055 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 17:00:05.036839 6055 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 17:00:05.036860 6055 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 17:00:05.036863 6055 factory.go:656] Stopping watch factory\\\\nI0216 17:00:05.036874 6055 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 17:00:05.036882 6055 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 17:00:05.036881 6055 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 17:00:05.037033 6055 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 17:00:05.037088 6055 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"message\\\":\\\"]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0216 17:00:06.059578 6228 services_controller.go:452] Built service openshift-machine-api/cluster-autoscaler-operator per-node LB for network=default: []services.LB{}\\\\nI0216 17:00:06.059590 6228 services_controller.go:453] Built service openshift-machine-api/cluster-autoscaler-operator template LB for network=default: []services.LB{}\\\\nI0216 17:00:06.059565 6228 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"na
me\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.101194 4794 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.110224 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.120396 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"237c381f-d225-4a4b-8bc9-6c03ee09015f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cmzfs\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.131698 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.131736 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.131747 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.131764 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.131776 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:09Z","lastTransitionTime":"2026-02-16T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.133114 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c
13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.146583 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/237c381f-d225-4a4b-8bc9-6c03ee09015f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cmzfs\" (UID: \"237c381f-d225-4a4b-8bc9-6c03ee09015f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.146635 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/237c381f-d225-4a4b-8bc9-6c03ee09015f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cmzfs\" (UID: \"237c381f-d225-4a4b-8bc9-6c03ee09015f\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.146666 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/237c381f-d225-4a4b-8bc9-6c03ee09015f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cmzfs\" (UID: \"237c381f-d225-4a4b-8bc9-6c03ee09015f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.146688 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fnpn\" (UniqueName: \"kubernetes.io/projected/237c381f-d225-4a4b-8bc9-6c03ee09015f-kube-api-access-4fnpn\") pod \"ovnkube-control-plane-749d76644c-cmzfs\" (UID: \"237c381f-d225-4a4b-8bc9-6c03ee09015f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.147416 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/237c381f-d225-4a4b-8bc9-6c03ee09015f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cmzfs\" (UID: \"237c381f-d225-4a4b-8bc9-6c03ee09015f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.147635 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/237c381f-d225-4a4b-8bc9-6c03ee09015f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cmzfs\" (UID: \"237c381f-d225-4a4b-8bc9-6c03ee09015f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.154817 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/237c381f-d225-4a4b-8bc9-6c03ee09015f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cmzfs\" (UID: \"237c381f-d225-4a4b-8bc9-6c03ee09015f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.162389 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fnpn\" (UniqueName: \"kubernetes.io/projected/237c381f-d225-4a4b-8bc9-6c03ee09015f-kube-api-access-4fnpn\") pod \"ovnkube-control-plane-749d76644c-cmzfs\" (UID: \"237c381f-d225-4a4b-8bc9-6c03ee09015f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.233968 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.234004 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.234015 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.234031 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.234043 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:09Z","lastTransitionTime":"2026-02-16T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.256907 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" Feb 16 17:00:09 crc kubenswrapper[4794]: W0216 17:00:09.268662 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod237c381f_d225_4a4b_8bc9_6c03ee09015f.slice/crio-f095fd83cb8c2c18241e5c97634897ffb29b8fa6054e220d4284053352834e9a WatchSource:0}: Error finding container f095fd83cb8c2c18241e5c97634897ffb29b8fa6054e220d4284053352834e9a: Status 404 returned error can't find the container with id f095fd83cb8c2c18241e5c97634897ffb29b8fa6054e220d4284053352834e9a Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.336390 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.336425 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.336437 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.336452 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.336463 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:09Z","lastTransitionTime":"2026-02-16T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.440522 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.440584 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.440598 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.440625 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.440642 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:09Z","lastTransitionTime":"2026-02-16T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.544407 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.544459 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.544471 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.544486 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.544497 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:09Z","lastTransitionTime":"2026-02-16T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.647021 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.647058 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.647069 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.647119 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.647131 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:09Z","lastTransitionTime":"2026-02-16T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.749636 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.749675 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.749684 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.749700 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.749710 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:09Z","lastTransitionTime":"2026-02-16T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.751951 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:29:27.6746879 +0000 UTC Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.791104 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.791195 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:09 crc kubenswrapper[4794]: E0216 17:00:09.791268 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.791401 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:09 crc kubenswrapper[4794]: E0216 17:00:09.791404 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:09 crc kubenswrapper[4794]: E0216 17:00:09.791702 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.852188 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.852239 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.852248 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.852262 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.852273 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:09Z","lastTransitionTime":"2026-02-16T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.956994 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.957035 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.957047 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.957064 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:09 crc kubenswrapper[4794]: I0216 17:00:09.957074 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:09Z","lastTransitionTime":"2026-02-16T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.059498 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.059539 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.059549 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.059565 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.059575 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:10Z","lastTransitionTime":"2026-02-16T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.116335 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" event={"ID":"237c381f-d225-4a4b-8bc9-6c03ee09015f","Type":"ContainerStarted","Data":"61b6dad949f71a170816c56dbc1ad2c99e88e7ecf1043d74bb077950f135eeed"} Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.116416 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" event={"ID":"237c381f-d225-4a4b-8bc9-6c03ee09015f","Type":"ContainerStarted","Data":"ca731f62fcf2b8c8e68925b2b13cf1f61cab4b77425c85820278f710f4d8c939"} Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.116432 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" event={"ID":"237c381f-d225-4a4b-8bc9-6c03ee09015f","Type":"ContainerStarted","Data":"f095fd83cb8c2c18241e5c97634897ffb29b8fa6054e220d4284053352834e9a"} Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.132484 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.150070 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.162286 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.162334 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.162343 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.162359 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.162369 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:10Z","lastTransitionTime":"2026-02-16T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.164629 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.175980 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.187449 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.208727 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:05.036601 6055 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:05.036779 6055 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 17:00:05.036796 6055 handler.go:190] Sending *v1.Node event handler 7 
for removal\\\\nI0216 17:00:05.036808 6055 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 17:00:05.036839 6055 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 17:00:05.036860 6055 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 17:00:05.036863 6055 factory.go:656] Stopping watch factory\\\\nI0216 17:00:05.036874 6055 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 17:00:05.036882 6055 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 17:00:05.036881 6055 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 17:00:05.037033 6055 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 17:00:05.037088 6055 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"message\\\":\\\"]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0216 17:00:06.059578 6228 services_controller.go:452] Built service openshift-machine-api/cluster-autoscaler-operator per-node LB for network=default: []services.LB{}\\\\nI0216 17:00:06.059590 6228 services_controller.go:453] Built service openshift-machine-api/cluster-autoscaler-operator template LB for network=default: []services.LB{}\\\\nI0216 17:00:06.059565 6228 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"na
me\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.218903 4794 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.233486 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d
4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T1
6:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.245785 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.257701 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.264682 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.264726 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.264735 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.264751 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.264767 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:10Z","lastTransitionTime":"2026-02-16T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.273568 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.284478 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.294800 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.305953 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"237c381f-d225-4a4b-8bc9-6c03ee09015f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca731f62fcf2b8c8e68925b2b13cf1f61cab4b77425c85820278f710f4d8c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b6dad949f71a170816c56dbc1ad2c99e88e7ecf1043d74bb077950f135eeed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cmzfs\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.318413 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:5
9:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcc
e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.366923 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-tf698"] Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.367211 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.367284 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 
17:00:10.367295 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.367341 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.367354 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:10Z","lastTransitionTime":"2026-02-16T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.367515 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:10 crc kubenswrapper[4794]: E0216 17:00:10.367585 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.383884 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.395129 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.405059 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.418707 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.431091 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.443610 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:
55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete 
has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b93
26e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.456510 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.461023 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs\") pod \"network-metrics-daemon-tf698\" (UID: \"894bff1b-b8b9-4c28-8ffe-0e0469958227\") " pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.461342 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-zns6k\" (UniqueName: \"kubernetes.io/projected/894bff1b-b8b9-4c28-8ffe-0e0469958227-kube-api-access-zns6k\") pod \"network-metrics-daemon-tf698\" (UID: \"894bff1b-b8b9-4c28-8ffe-0e0469958227\") " pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.469267 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.469342 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.469353 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.469370 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.469381 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:10Z","lastTransitionTime":"2026-02-16T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.471896 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.490530 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312f
cc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.512848 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:05.036601 6055 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:05.036779 6055 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 17:00:05.036796 6055 handler.go:190] Sending *v1.Node event handler 7 
for removal\\\\nI0216 17:00:05.036808 6055 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 17:00:05.036839 6055 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 17:00:05.036860 6055 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 17:00:05.036863 6055 factory.go:656] Stopping watch factory\\\\nI0216 17:00:05.036874 6055 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 17:00:05.036882 6055 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 17:00:05.036881 6055 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 17:00:05.037033 6055 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 17:00:05.037088 6055 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"message\\\":\\\"]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0216 17:00:06.059578 6228 services_controller.go:452] Built service openshift-machine-api/cluster-autoscaler-operator per-node LB for network=default: []services.LB{}\\\\nI0216 17:00:06.059590 6228 services_controller.go:453] Built service openshift-machine-api/cluster-autoscaler-operator template LB for network=default: []services.LB{}\\\\nI0216 17:00:06.059565 6228 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"na
me\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.524002 4794 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.534605 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tf698" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tf698\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc 
kubenswrapper[4794]: I0216 17:00:10.546730 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.555708 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.562654 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zns6k\" (UniqueName: \"kubernetes.io/projected/894bff1b-b8b9-4c28-8ffe-0e0469958227-kube-api-access-zns6k\") pod \"network-metrics-daemon-tf698\" (UID: \"894bff1b-b8b9-4c28-8ffe-0e0469958227\") " pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.562725 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs\") pod \"network-metrics-daemon-tf698\" (UID: \"894bff1b-b8b9-4c28-8ffe-0e0469958227\") " pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:10 crc kubenswrapper[4794]: E0216 17:00:10.562871 4794 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:10 crc kubenswrapper[4794]: E0216 17:00:10.562960 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs podName:894bff1b-b8b9-4c28-8ffe-0e0469958227 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:11.06294176 +0000 UTC m=+37.011036407 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs") pod "network-metrics-daemon-tf698" (UID: "894bff1b-b8b9-4c28-8ffe-0e0469958227") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.567854 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"237c381f-d225-4a4b-8bc9-6c03ee09015f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca731f62fcf2b8c8e68925b2b13cf1f61cab4b77425c85820278f710f4d8c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b6dad949f71a170816c56dbc1ad2c99e88e7ecf1043d74bb077950f135eeed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cmzfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.571475 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.571507 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.571517 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.571533 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.571544 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:10Z","lastTransitionTime":"2026-02-16T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.578016 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zns6k\" (UniqueName: \"kubernetes.io/projected/894bff1b-b8b9-4c28-8ffe-0e0469958227-kube-api-access-zns6k\") pod \"network-metrics-daemon-tf698\" (UID: \"894bff1b-b8b9-4c28-8ffe-0e0469958227\") " pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.581532 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.674042 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.674089 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.674099 4794 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.674115 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.674126 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:10Z","lastTransitionTime":"2026-02-16T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.752752 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:37:47.226412 +0000 UTC Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.776622 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.776657 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.776668 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.776699 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.776708 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:10Z","lastTransitionTime":"2026-02-16T17:00:10Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.879338 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.879415 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.879427 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.879444 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.879456 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:10Z","lastTransitionTime":"2026-02-16T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.981852 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.981912 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.981923 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.981938 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:10 crc kubenswrapper[4794]: I0216 17:00:10.981948 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:10Z","lastTransitionTime":"2026-02-16T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.068936 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs\") pod \"network-metrics-daemon-tf698\" (UID: \"894bff1b-b8b9-4c28-8ffe-0e0469958227\") " pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.069110 4794 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.069210 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs podName:894bff1b-b8b9-4c28-8ffe-0e0469958227 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:12.069185843 +0000 UTC m=+38.017280540 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs") pod "network-metrics-daemon-tf698" (UID: "894bff1b-b8b9-4c28-8ffe-0e0469958227") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.084065 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.084134 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.084147 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.084165 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.084179 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:11Z","lastTransitionTime":"2026-02-16T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.187521 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.187724 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.187733 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.187746 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.187754 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:11Z","lastTransitionTime":"2026-02-16T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.290606 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.290665 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.290679 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.290700 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.290769 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:11Z","lastTransitionTime":"2026-02-16T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.393340 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.393398 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.393409 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.393426 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.393435 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:11Z","lastTransitionTime":"2026-02-16T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.496099 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.496146 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.496159 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.496178 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.496188 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:11Z","lastTransitionTime":"2026-02-16T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.574072 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.574140 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.574167 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.574201 4794 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.574275 4794 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.574291 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.574337 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.574348 4794 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.574321 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:27.574274513 +0000 UTC m=+53.522369180 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.574412 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:27.574396707 +0000 UTC m=+53.522491424 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.574426 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:27.574420397 +0000 UTC m=+53.522515144 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.598481 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.598514 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.598522 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.598535 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.598545 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:11Z","lastTransitionTime":"2026-02-16T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.675610 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.675777 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:00:27.675754938 +0000 UTC m=+53.623849585 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.675816 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.675976 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.675993 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.676004 4794 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.676047 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:27.676037206 +0000 UTC m=+53.624131853 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.700643 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.700679 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.700689 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.700706 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.700717 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:11Z","lastTransitionTime":"2026-02-16T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.753274 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:33:59.937340518 +0000 UTC Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.791108 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.791144 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.791181 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.791157 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.791292 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.791403 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.791504 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.791564 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.795614 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.795644 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.795655 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.795672 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.795684 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:11Z","lastTransitionTime":"2026-02-16T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.813609 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:11Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.817773 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.817799 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.817811 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.817824 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.817835 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:11Z","lastTransitionTime":"2026-02-16T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.835793 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:11Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.842852 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.842921 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.842941 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.842983 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.843016 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:11Z","lastTransitionTime":"2026-02-16T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.856520 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:11Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.860751 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.860782 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.860792 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.860806 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.860815 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:11Z","lastTransitionTime":"2026-02-16T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.872999 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:11Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.876166 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.876193 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.876204 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.876216 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.876223 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:11Z","lastTransitionTime":"2026-02-16T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.890452 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:11Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:11 crc kubenswrapper[4794]: E0216 17:00:11.891069 4794 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.892603 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.892657 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.892676 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.892697 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.892713 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:11Z","lastTransitionTime":"2026-02-16T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.995652 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.995710 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.995722 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.995741 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:11 crc kubenswrapper[4794]: I0216 17:00:11.995753 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:11Z","lastTransitionTime":"2026-02-16T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.079592 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs\") pod \"network-metrics-daemon-tf698\" (UID: \"894bff1b-b8b9-4c28-8ffe-0e0469958227\") " pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:12 crc kubenswrapper[4794]: E0216 17:00:12.079742 4794 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:12 crc kubenswrapper[4794]: E0216 17:00:12.079795 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs podName:894bff1b-b8b9-4c28-8ffe-0e0469958227 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:14.079781955 +0000 UTC m=+40.027876602 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs") pod "network-metrics-daemon-tf698" (UID: "894bff1b-b8b9-4c28-8ffe-0e0469958227") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.097960 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.097996 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.098007 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.098022 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.098033 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:12Z","lastTransitionTime":"2026-02-16T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.200636 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.200676 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.200686 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.200701 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.200711 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:12Z","lastTransitionTime":"2026-02-16T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.302865 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.302920 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.302930 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.302946 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.302957 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:12Z","lastTransitionTime":"2026-02-16T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.405904 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.405942 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.405951 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.405968 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.405979 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:12Z","lastTransitionTime":"2026-02-16T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.508421 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.508471 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.508484 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.508502 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.508514 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:12Z","lastTransitionTime":"2026-02-16T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.611192 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.611232 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.611244 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.611260 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.611272 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:12Z","lastTransitionTime":"2026-02-16T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.713706 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.713760 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.713770 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.713788 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.713801 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:12Z","lastTransitionTime":"2026-02-16T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.754413 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:42:52.79812677 +0000 UTC Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.815890 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.815919 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.815928 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.815939 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.815949 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:12Z","lastTransitionTime":"2026-02-16T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.918640 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.918672 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.918681 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.918693 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:12 crc kubenswrapper[4794]: I0216 17:00:12.918704 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:12Z","lastTransitionTime":"2026-02-16T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.021391 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.021468 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.021495 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.021523 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.021547 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:13Z","lastTransitionTime":"2026-02-16T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.124649 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.124707 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.124730 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.124756 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.124775 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:13Z","lastTransitionTime":"2026-02-16T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.227409 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.227460 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.227475 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.227492 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.227506 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:13Z","lastTransitionTime":"2026-02-16T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.330214 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.330260 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.330271 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.330288 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.330328 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:13Z","lastTransitionTime":"2026-02-16T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.432462 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.432502 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.432513 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.432529 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.432541 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:13Z","lastTransitionTime":"2026-02-16T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.535460 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.535505 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.535520 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.535544 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.535561 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:13Z","lastTransitionTime":"2026-02-16T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.637454 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.637499 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.637510 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.637528 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.637541 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:13Z","lastTransitionTime":"2026-02-16T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.740279 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.740353 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.740369 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.740394 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.740410 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:13Z","lastTransitionTime":"2026-02-16T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.754675 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:09:31.527008341 +0000 UTC Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.790520 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.790548 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.790600 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:13 crc kubenswrapper[4794]: E0216 17:00:13.790664 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:13 crc kubenswrapper[4794]: E0216 17:00:13.790766 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:13 crc kubenswrapper[4794]: E0216 17:00:13.790905 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.790475 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:13 crc kubenswrapper[4794]: E0216 17:00:13.791135 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.843633 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.843667 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.843678 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.843692 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.843705 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:13Z","lastTransitionTime":"2026-02-16T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.946161 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.946198 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.946209 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.946226 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:13 crc kubenswrapper[4794]: I0216 17:00:13.946237 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:13Z","lastTransitionTime":"2026-02-16T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.048888 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.048924 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.048936 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.048953 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.048967 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:14Z","lastTransitionTime":"2026-02-16T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.100859 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs\") pod \"network-metrics-daemon-tf698\" (UID: \"894bff1b-b8b9-4c28-8ffe-0e0469958227\") " pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:14 crc kubenswrapper[4794]: E0216 17:00:14.100996 4794 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:14 crc kubenswrapper[4794]: E0216 17:00:14.101047 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs podName:894bff1b-b8b9-4c28-8ffe-0e0469958227 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:18.101033123 +0000 UTC m=+44.049127770 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs") pod "network-metrics-daemon-tf698" (UID: "894bff1b-b8b9-4c28-8ffe-0e0469958227") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.151060 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.151102 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.151113 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.151129 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.151141 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:14Z","lastTransitionTime":"2026-02-16T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.254200 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.254250 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.254266 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.254288 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.254332 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:14Z","lastTransitionTime":"2026-02-16T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.357118 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.357153 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.357161 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.357175 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.357185 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:14Z","lastTransitionTime":"2026-02-16T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.460661 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.460710 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.460722 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.460741 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.460753 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:14Z","lastTransitionTime":"2026-02-16T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.564407 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.564498 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.564521 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.564554 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.564578 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:14Z","lastTransitionTime":"2026-02-16T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.667145 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.667356 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.667386 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.667421 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.667446 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:14Z","lastTransitionTime":"2026-02-16T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.756179 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 07:34:19.273752043 +0000 UTC Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.770601 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.770661 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.770672 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.770695 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.770710 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:14Z","lastTransitionTime":"2026-02-16T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.808085 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:14Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.821962 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:14Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.838054 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"237c381f-d225-4a4b-8bc9-6c03ee09015f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca731f62fcf2b8c8e68925b2b13cf1f61cab4b77425c85820278f710f4d8c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b6dad949f71a170816c56dbc1ad2c99e88e7ecf1043d74bb077950f135eeed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cmzfs\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:14Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.853180 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:5
9:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcc
e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:14Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.872608 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:14Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.873620 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:14 crc 
kubenswrapper[4794]: I0216 17:00:14.873726 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.873785 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.873862 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.873934 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:14Z","lastTransitionTime":"2026-02-16T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.897374 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:14Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.911709 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:14Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.923169 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:14Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.933855 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:14Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.959594 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3705b165cdf9225556adb5c2effd58475ac7d4189b45799b2b6722ca8fac13de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:05.036601 6055 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:05.036779 6055 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 17:00:05.036796 6055 handler.go:190] Sending *v1.Node event handler 7 
for removal\\\\nI0216 17:00:05.036808 6055 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 17:00:05.036839 6055 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 17:00:05.036860 6055 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 17:00:05.036863 6055 factory.go:656] Stopping watch factory\\\\nI0216 17:00:05.036874 6055 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 17:00:05.036882 6055 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 17:00:05.036881 6055 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 17:00:05.037033 6055 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 17:00:05.037088 6055 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"message\\\":\\\"]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0216 17:00:06.059578 6228 services_controller.go:452] Built service openshift-machine-api/cluster-autoscaler-operator per-node LB for network=default: []services.LB{}\\\\nI0216 17:00:06.059590 6228 services_controller.go:453] Built service openshift-machine-api/cluster-autoscaler-operator template LB for network=default: []services.LB{}\\\\nI0216 17:00:06.059565 6228 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"na
me\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:14Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.970466 4794 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:14Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.976806 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.977107 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.977222 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.977476 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.977604 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:14Z","lastTransitionTime":"2026-02-16T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:14 crc kubenswrapper[4794]: I0216 17:00:14.983112 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tf698" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tf698\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:14Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:14 crc 
kubenswrapper[4794]: I0216 17:00:14.999374 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e0707849
0ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:14Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.013653 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:15Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.024630 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:15Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.038510 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312f
cc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:15Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.080272 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.080344 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.080367 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.080387 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.080401 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:15Z","lastTransitionTime":"2026-02-16T17:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.183377 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.183420 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.183430 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.183447 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.183458 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:15Z","lastTransitionTime":"2026-02-16T17:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.286644 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.286729 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.287032 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.287390 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.287479 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:15Z","lastTransitionTime":"2026-02-16T17:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.390159 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.390230 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.390251 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.390739 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.391004 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:15Z","lastTransitionTime":"2026-02-16T17:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.493520 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.493552 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.493563 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.493583 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.493594 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:15Z","lastTransitionTime":"2026-02-16T17:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.595712 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.595760 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.595770 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.595786 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.595797 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:15Z","lastTransitionTime":"2026-02-16T17:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.698219 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.698276 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.698289 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.698321 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.698334 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:15Z","lastTransitionTime":"2026-02-16T17:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.756318 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 07:01:25.057447987 +0000 UTC Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.790687 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.790789 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:15 crc kubenswrapper[4794]: E0216 17:00:15.790831 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.790852 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.790914 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:15 crc kubenswrapper[4794]: E0216 17:00:15.790949 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:15 crc kubenswrapper[4794]: E0216 17:00:15.791015 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:15 crc kubenswrapper[4794]: E0216 17:00:15.791109 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.800863 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.800906 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.800918 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.800934 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.800947 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:15Z","lastTransitionTime":"2026-02-16T17:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.903910 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.903972 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.903992 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.904018 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:15 crc kubenswrapper[4794]: I0216 17:00:15.904037 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:15Z","lastTransitionTime":"2026-02-16T17:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.007003 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.007075 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.007105 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.007126 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.007141 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:16Z","lastTransitionTime":"2026-02-16T17:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.109487 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.109544 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.109563 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.109588 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.109605 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:16Z","lastTransitionTime":"2026-02-16T17:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.212358 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.212426 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.212450 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.212479 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.212503 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:16Z","lastTransitionTime":"2026-02-16T17:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.315816 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.315865 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.315877 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.315899 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.315913 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:16Z","lastTransitionTime":"2026-02-16T17:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.418499 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.418530 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.418541 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.418559 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.418570 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:16Z","lastTransitionTime":"2026-02-16T17:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.521198 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.521237 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.521244 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.521258 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.521268 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:16Z","lastTransitionTime":"2026-02-16T17:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.630200 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.630338 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.630369 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.630408 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.630541 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:16Z","lastTransitionTime":"2026-02-16T17:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.733452 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.733541 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.733559 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.733613 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.733634 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:16Z","lastTransitionTime":"2026-02-16T17:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.756933 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 15:09:07.657911788 +0000 UTC
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.836473 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.836521 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.836537 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.836558 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.836572 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:16Z","lastTransitionTime":"2026-02-16T17:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.939439 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.939491 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.939509 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.939533 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:16 crc kubenswrapper[4794]: I0216 17:00:16.939550 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:16Z","lastTransitionTime":"2026-02-16T17:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.042720 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.042788 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.042812 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.042842 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.042863 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:17Z","lastTransitionTime":"2026-02-16T17:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.145417 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.145457 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.145468 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.145487 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.145500 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:17Z","lastTransitionTime":"2026-02-16T17:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.247923 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.247976 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.247991 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.248014 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.248031 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:17Z","lastTransitionTime":"2026-02-16T17:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.350106 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.350148 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.350160 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.350175 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.350186 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:17Z","lastTransitionTime":"2026-02-16T17:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.453298 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.453357 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.453367 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.453384 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.453398 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:17Z","lastTransitionTime":"2026-02-16T17:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.555681 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.555747 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.555764 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.555791 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.555810 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:17Z","lastTransitionTime":"2026-02-16T17:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.658216 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.658264 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.658276 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.658295 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.658331 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:17Z","lastTransitionTime":"2026-02-16T17:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.757507 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 09:35:06.432138724 +0000 UTC
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.760735 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.760791 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.760805 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.760826 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.760845 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:17Z","lastTransitionTime":"2026-02-16T17:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.791054 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.791067 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.791142 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.791216 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 17:00:17 crc kubenswrapper[4794]: E0216 17:00:17.791376 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227"
Feb 16 17:00:17 crc kubenswrapper[4794]: E0216 17:00:17.791462 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 17:00:17 crc kubenswrapper[4794]: E0216 17:00:17.791925 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 17:00:17 crc kubenswrapper[4794]: E0216 17:00:17.791997 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.863708 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.863748 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.863780 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.863801 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.863812 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:17Z","lastTransitionTime":"2026-02-16T17:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.966743 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.966799 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.966816 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.966841 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:17 crc kubenswrapper[4794]: I0216 17:00:17.966866 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:17Z","lastTransitionTime":"2026-02-16T17:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.070020 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.070056 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.070064 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.070095 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.070111 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:18Z","lastTransitionTime":"2026-02-16T17:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.146069 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs\") pod \"network-metrics-daemon-tf698\" (UID: \"894bff1b-b8b9-4c28-8ffe-0e0469958227\") " pod="openshift-multus/network-metrics-daemon-tf698"
Feb 16 17:00:18 crc kubenswrapper[4794]: E0216 17:00:18.146279 4794 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 17:00:18 crc kubenswrapper[4794]: E0216 17:00:18.146461 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs podName:894bff1b-b8b9-4c28-8ffe-0e0469958227 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:26.146432397 +0000 UTC m=+52.094527074 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs") pod "network-metrics-daemon-tf698" (UID: "894bff1b-b8b9-4c28-8ffe-0e0469958227") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.173226 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.173264 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.173276 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.173291 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.173323 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:18Z","lastTransitionTime":"2026-02-16T17:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.277057 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.277160 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.277196 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.277220 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.277232 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:18Z","lastTransitionTime":"2026-02-16T17:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.380329 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.380379 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.380397 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.380415 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.380429 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:18Z","lastTransitionTime":"2026-02-16T17:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.483349 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.483380 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.483388 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.483401 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.483410 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:18Z","lastTransitionTime":"2026-02-16T17:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.585949 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.586001 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.586014 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.586035 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.586047 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:18Z","lastTransitionTime":"2026-02-16T17:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.688809 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.688858 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.688869 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.688886 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.688897 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:18Z","lastTransitionTime":"2026-02-16T17:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.757767 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 00:41:52.937356074 +0000 UTC
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.791532 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.791579 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.791589 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.791605 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.791617 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:18Z","lastTransitionTime":"2026-02-16T17:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.893041 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.894371 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.894431 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.894448 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.894478 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.894499 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:18Z","lastTransitionTime":"2026-02-16T17:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.894816 4794 scope.go:117] "RemoveContainer" containerID="b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf" Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.912861 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resou
rce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753
fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 
16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.931098 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.952019 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.968682 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312f
cc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.987136 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"message\\\":\\\"]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0216 17:00:06.059578 6228 services_controller.go:452] Built service openshift-machine-api/cluster-autoscaler-operator per-node LB for network=default: []services.LB{}\\\\nI0216 17:00:06.059590 6228 services_controller.go:453] Built service 
openshift-machine-api/cluster-autoscaler-operator template LB for network=default: []services.LB{}\\\\nI0216 17:00:06.059565 6228 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9krvl_openshift-ovn-kubernetes(d985e4f1-78bb-43f9-b86c-cd47831d602c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7
713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.998671 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.998704 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.998716 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.998732 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:18 crc kubenswrapper[4794]: I0216 17:00:18.998743 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:18Z","lastTransitionTime":"2026-02-16T17:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.003056 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:19Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.015179 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tf698" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tf698\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:19Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:19 crc 
kubenswrapper[4794]: I0216 17:00:19.026635 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:19Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.038822 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:19Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.049436 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"237c381f-d225-4a4b-8bc9-6c03ee09015f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca731f62fcf2b8c8e68925b2b13cf1f61cab4b77425c85820278f710f4d8c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b6dad949f71a170816c56dbc1ad2c99e88e7ecf1043d74bb077950f135eeed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cmzfs\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:19Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.063098 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:5
9:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcc
e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:19Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.077902 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:19Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.089583 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:19Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.100122 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:19Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.100844 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.100891 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.100902 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.100919 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.100930 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:19Z","lastTransitionTime":"2026-02-16T17:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.112432 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:19Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.128841 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:19Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.204111 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:19 crc 
kubenswrapper[4794]: I0216 17:00:19.204161 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.204172 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.204191 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.204207 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:19Z","lastTransitionTime":"2026-02-16T17:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.307160 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.307202 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.307214 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.307229 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.307241 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:19Z","lastTransitionTime":"2026-02-16T17:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.409441 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.409488 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.409500 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.409519 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.409531 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:19Z","lastTransitionTime":"2026-02-16T17:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.512341 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.512382 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.512393 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.512435 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.512449 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:19Z","lastTransitionTime":"2026-02-16T17:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.614995 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.615247 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.615266 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.615284 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.615297 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:19Z","lastTransitionTime":"2026-02-16T17:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.718191 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.718267 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.718283 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.718316 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.718328 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:19Z","lastTransitionTime":"2026-02-16T17:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.758755 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 15:51:32.474029964 +0000 UTC Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.791335 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:19 crc kubenswrapper[4794]: E0216 17:00:19.791457 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.791490 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.791590 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:19 crc kubenswrapper[4794]: E0216 17:00:19.791632 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.791510 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:19 crc kubenswrapper[4794]: E0216 17:00:19.791729 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:19 crc kubenswrapper[4794]: E0216 17:00:19.791788 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.820734 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.820776 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.820788 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.820803 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.820813 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:19Z","lastTransitionTime":"2026-02-16T17:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.923434 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.923464 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.923473 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.923486 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:19 crc kubenswrapper[4794]: I0216 17:00:19.923495 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:19Z","lastTransitionTime":"2026-02-16T17:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.025814 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.025839 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.025848 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.025860 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.025869 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:20Z","lastTransitionTime":"2026-02-16T17:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.128473 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.128527 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.128545 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.128567 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.128584 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:20Z","lastTransitionTime":"2026-02-16T17:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.150726 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9krvl_d985e4f1-78bb-43f9-b86c-cd47831d602c/ovnkube-controller/1.log" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.153298 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerStarted","Data":"9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea"} Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.153978 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.172635 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f
145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.194524 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd
8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.210494 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.224924 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.231234 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.231270 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.231283 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.231321 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.231339 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:20Z","lastTransitionTime":"2026-02-16T17:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.238182 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.257285 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"message\\\":\\\"]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0216 17:00:06.059578 6228 services_controller.go:452] Built service openshift-machine-api/cluster-autoscaler-operator per-node LB for network=default: []services.LB{}\\\\nI0216 17:00:06.059590 6228 services_controller.go:453] Built service 
openshift-machine-api/cluster-autoscaler-operator template LB for network=default: []services.LB{}\\\\nI0216 17:00:06.059565 6228 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name
\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.272734 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.285747 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tf698" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tf698\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc 
kubenswrapper[4794]: I0216 17:00:20.301813 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e0707849
0ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.312634 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.325642 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.333700 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.333773 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.333793 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.333816 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.333833 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:20Z","lastTransitionTime":"2026-02-16T17:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.339256 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.349668 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.362611 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.373987 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"237c381f-d225-4a4b-8bc9-6c03ee09015f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca731f62fcf2b8c8e68925b2b13cf1f61cab4b77425c85820278f710f4d8c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b6dad949f71a170816c56dbc1ad2c99e88e7ecf1043d74bb077950f135eeed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cmzfs\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.386484 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:5
9:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcc
e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.436109 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.436155 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.436169 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:20 crc kubenswrapper[4794]: 
I0216 17:00:20.436187 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.436203 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:20Z","lastTransitionTime":"2026-02-16T17:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.539008 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.539070 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.539082 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.539097 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.539109 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:20Z","lastTransitionTime":"2026-02-16T17:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.641916 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.641957 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.641972 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.641989 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.642001 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:20Z","lastTransitionTime":"2026-02-16T17:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.745526 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.745599 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.745631 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.745662 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.745684 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:20Z","lastTransitionTime":"2026-02-16T17:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.759006 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:17:00.431351877 +0000 UTC Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.847917 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.847985 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.848011 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.848037 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.848056 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:20Z","lastTransitionTime":"2026-02-16T17:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.950838 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.950904 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.950916 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.950933 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:20 crc kubenswrapper[4794]: I0216 17:00:20.950945 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:20Z","lastTransitionTime":"2026-02-16T17:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.053282 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.053363 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.053375 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.053390 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.053401 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:21Z","lastTransitionTime":"2026-02-16T17:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.155792 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.155838 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.155853 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.155873 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.155892 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:21Z","lastTransitionTime":"2026-02-16T17:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.158121 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9krvl_d985e4f1-78bb-43f9-b86c-cd47831d602c/ovnkube-controller/2.log" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.158861 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9krvl_d985e4f1-78bb-43f9-b86c-cd47831d602c/ovnkube-controller/1.log" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.162409 4794 generic.go:334] "Generic (PLEG): container finished" podID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerID="9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea" exitCode=1 Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.162452 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerDied","Data":"9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea"} Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.162495 4794 scope.go:117] "RemoveContainer" containerID="b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.163000 4794 scope.go:117] "RemoveContainer" containerID="9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea" Feb 16 17:00:21 crc kubenswrapper[4794]: E0216 17:00:21.163145 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9krvl_openshift-ovn-kubernetes(d985e4f1-78bb-43f9-b86c-cd47831d602c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.177432 4794 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.187493 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.199237 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"237c381f-d225-4a4b-8bc9-6c03ee09015f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca731f62fcf2b8c8e68925b2b13cf1f61cab4b77425c85820278f710f4d8c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b6dad949f71a170816c56dbc1ad2c99e88e7ecf1043d74bb077950f135eeed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cmzfs\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.214118 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:5
9:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcc
e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.229186 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.250556 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.257934 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.258013 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.258036 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.258066 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.258091 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:21Z","lastTransitionTime":"2026-02-16T17:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.264421 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.288557 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.299458 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.310473 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.321713 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tf698" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tf698\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc 
kubenswrapper[4794]: I0216 17:00:21.333639 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e0707849
0ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.344547 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.356925 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.363909 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.363963 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.363977 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.363997 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.364012 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:21Z","lastTransitionTime":"2026-02-16T17:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.375562 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.393583 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b524bb791d010cf75bcb882df8ca49071114861eabc6437d1348bc94f74c3dbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"message\\\":\\\"]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0216 17:00:06.059578 
6228 services_controller.go:452] Built service openshift-machine-api/cluster-autoscaler-operator per-node LB for network=default: []services.LB{}\\\\nI0216 17:00:06.059590 6228 services_controller.go:453] Built service openshift-machine-api/cluster-autoscaler-operator template LB for network=default: []services.LB{}\\\\nI0216 17:00:06.059565 6228 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"message\\\":\\\"cs-daemon-tf698\\\\\\\", UID:\\\\\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26909\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' 
addLogicalPort failed for openshift-multus/network-metrics-daemon-tf698: failed to update pod openshift-multus/network-metrics-daemon-tf698: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z\\\\nI0216 17:00:20.291733 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 17:00:20.291760 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 17:00:20.291816 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 17:00:20.291837 6450 factory.go:656] Stopping watch factory\\\\nI0216 17:00:20.291853 6450 ovnkube.go:599] Stopped ovnkube\\\\nI0216 17:00:20.291877 6450 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 17:00:20.291880 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nF0216 17:00:20.291948 6450 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.468102 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.468141 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.468152 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.468166 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.468175 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:21Z","lastTransitionTime":"2026-02-16T17:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.570582 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.570631 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.570640 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.570653 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.570664 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:21Z","lastTransitionTime":"2026-02-16T17:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.672896 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.672939 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.672948 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.672966 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.672984 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:21Z","lastTransitionTime":"2026-02-16T17:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.760014 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 11:35:20.757191855 +0000 UTC Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.775757 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.775838 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.775866 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.775892 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.775905 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:21Z","lastTransitionTime":"2026-02-16T17:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.791172 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.791247 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:21 crc kubenswrapper[4794]: E0216 17:00:21.791355 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.791477 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:21 crc kubenswrapper[4794]: E0216 17:00:21.791544 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.791488 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:21 crc kubenswrapper[4794]: E0216 17:00:21.791694 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:21 crc kubenswrapper[4794]: E0216 17:00:21.791911 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.879020 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.879060 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.879072 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.879089 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.879101 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:21Z","lastTransitionTime":"2026-02-16T17:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.981953 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.981998 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.982012 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.982032 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:21 crc kubenswrapper[4794]: I0216 17:00:21.982046 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:21Z","lastTransitionTime":"2026-02-16T17:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.084850 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.084906 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.084919 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.084937 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.084952 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:22Z","lastTransitionTime":"2026-02-16T17:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.148186 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.148257 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.148271 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.148290 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.148331 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:22Z","lastTransitionTime":"2026-02-16T17:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:22 crc kubenswrapper[4794]: E0216 17:00:22.167886 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.169983 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9krvl_d985e4f1-78bb-43f9-b86c-cd47831d602c/ovnkube-controller/2.log" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.175561 4794 scope.go:117] "RemoveContainer" containerID="9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.175669 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.175712 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.175734 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc kubenswrapper[4794]: E0216 17:00:22.175746 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9krvl_openshift-ovn-kubernetes(d985e4f1-78bb-43f9-b86c-cd47831d602c)\"" 
pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.175767 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.175792 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:22Z","lastTransitionTime":"2026-02-16T17:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.188007 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a8
10bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4794]: E0216 17:00:22.195265 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.199255 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.199296 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.199339 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.199360 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.199378 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:22Z","lastTransitionTime":"2026-02-16T17:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.203051 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tf698" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tf698\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc 
kubenswrapper[4794]: E0216 17:00:22.218761 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.223872 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.224335 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.224524 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.224704 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.224862 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:22Z","lastTransitionTime":"2026-02-16T17:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.226234 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4794]: E0216 17:00:22.240021 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.243652 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.244501 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.244537 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.244548 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc 
kubenswrapper[4794]: I0216 17:00:22.244563 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.244577 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:22Z","lastTransitionTime":"2026-02-16T17:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.256030 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4794]: E0216 
17:00:22.258350 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4794]: E0216 17:00:22.258651 4794 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.260071 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.260105 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.260117 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.260132 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.260143 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:22Z","lastTransitionTime":"2026-02-16T17:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.273288 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.296778 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"message\\\":\\\"cs-daemon-tf698\\\\\\\", UID:\\\\\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26909\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-tf698: failed to update pod 
openshift-multus/network-metrics-daemon-tf698: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z\\\\nI0216 17:00:20.291733 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 17:00:20.291760 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 17:00:20.291816 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 17:00:20.291837 6450 factory.go:656] Stopping watch factory\\\\nI0216 17:00:20.291853 6450 ovnkube.go:599] Stopped ovnkube\\\\nI0216 17:00:20.291877 6450 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 17:00:20.291880 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nF0216 17:00:20.291948 6450 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9krvl_openshift-ovn-kubernetes(d985e4f1-78bb-43f9-b86c-cd47831d602c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7
713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.308632 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.318321 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.327718 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"237c381f-d225-4a4b-8bc9-6c03ee09015f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca731f62fcf2b8c8e68925b2b13cf1f61cab4b77425c85820278f710f4d8c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b6dad949f71a170816c56dbc1ad2c99e88e7ecf1043d74bb077950f135eeed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cmzfs\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.339535 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:5
9:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcc
e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.353579 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.362817 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.362856 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.362866 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.362883 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.362894 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:22Z","lastTransitionTime":"2026-02-16T17:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.370260 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.384962 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.408636 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.426091 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.465878 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc 
kubenswrapper[4794]: I0216 17:00:22.465916 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.465929 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.465946 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.465959 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:22Z","lastTransitionTime":"2026-02-16T17:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.568476 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.568515 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.568525 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.568541 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.568552 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:22Z","lastTransitionTime":"2026-02-16T17:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.671041 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.671077 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.671087 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.671103 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.671114 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:22Z","lastTransitionTime":"2026-02-16T17:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.761013 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 21:38:42.802248301 +0000 UTC Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.773792 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.773852 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.773864 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.773880 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.773891 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:22Z","lastTransitionTime":"2026-02-16T17:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.876271 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.876366 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.876382 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.876396 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.876405 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:22Z","lastTransitionTime":"2026-02-16T17:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.979296 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.979400 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.979426 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.979455 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:22 crc kubenswrapper[4794]: I0216 17:00:22.979475 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:22Z","lastTransitionTime":"2026-02-16T17:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.081225 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.081275 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.081290 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.081338 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.081354 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:23Z","lastTransitionTime":"2026-02-16T17:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.183258 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.183514 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.183587 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.183656 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.183719 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:23Z","lastTransitionTime":"2026-02-16T17:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.286513 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.287040 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.287156 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.287251 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.287353 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:23Z","lastTransitionTime":"2026-02-16T17:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.390445 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.390502 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.390520 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.390543 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.390563 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:23Z","lastTransitionTime":"2026-02-16T17:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.492683 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.492717 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.492728 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.492742 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.492754 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:23Z","lastTransitionTime":"2026-02-16T17:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.595029 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.595081 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.595099 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.595116 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.595126 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:23Z","lastTransitionTime":"2026-02-16T17:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.697796 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.697878 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.697898 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.697926 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.697949 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:23Z","lastTransitionTime":"2026-02-16T17:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.761451 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 08:41:04.432124669 +0000 UTC Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.790768 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.790787 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.790887 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.791042 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:23 crc kubenswrapper[4794]: E0216 17:00:23.791211 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:23 crc kubenswrapper[4794]: E0216 17:00:23.791286 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:23 crc kubenswrapper[4794]: E0216 17:00:23.791374 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:23 crc kubenswrapper[4794]: E0216 17:00:23.791783 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.800384 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.800425 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.800442 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.800460 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.800473 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:23Z","lastTransitionTime":"2026-02-16T17:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.903230 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.903274 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.903286 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.903319 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:23 crc kubenswrapper[4794]: I0216 17:00:23.903332 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:23Z","lastTransitionTime":"2026-02-16T17:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.005755 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.005846 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.005870 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.005900 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.005926 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:24Z","lastTransitionTime":"2026-02-16T17:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.108802 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.108860 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.108873 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.108896 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.108907 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:24Z","lastTransitionTime":"2026-02-16T17:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.212006 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.212060 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.212070 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.212086 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.212097 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:24Z","lastTransitionTime":"2026-02-16T17:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.314980 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.315019 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.315028 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.315040 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.315050 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:24Z","lastTransitionTime":"2026-02-16T17:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.418015 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.418095 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.418131 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.418163 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.418184 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:24Z","lastTransitionTime":"2026-02-16T17:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.520507 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.520584 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.520610 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.520641 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.520666 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:24Z","lastTransitionTime":"2026-02-16T17:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.622362 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.622406 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.622414 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.622426 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.622435 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:24Z","lastTransitionTime":"2026-02-16T17:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.724919 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.724979 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.724995 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.725011 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.725022 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:24Z","lastTransitionTime":"2026-02-16T17:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.762195 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 07:55:33.910264015 +0000 UTC Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.805452 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.820277 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.827898 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:24 crc 
kubenswrapper[4794]: I0216 17:00:24.827932 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.827941 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.827954 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.827967 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:24Z","lastTransitionTime":"2026-02-16T17:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.834718 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.851699 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.867714 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.889532 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.908144 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"message\\\":\\\"cs-daemon-tf698\\\\\\\", UID:\\\\\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26909\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-tf698: failed to update pod 
openshift-multus/network-metrics-daemon-tf698: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z\\\\nI0216 17:00:20.291733 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 17:00:20.291760 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 17:00:20.291816 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 17:00:20.291837 6450 factory.go:656] Stopping watch factory\\\\nI0216 17:00:20.291853 6450 ovnkube.go:599] Stopped ovnkube\\\\nI0216 17:00:20.291877 6450 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 17:00:20.291880 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nF0216 17:00:20.291948 6450 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9krvl_openshift-ovn-kubernetes(d985e4f1-78bb-43f9-b86c-cd47831d602c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7
713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.920598 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.930765 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.930790 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.930798 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.930810 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.930818 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:24Z","lastTransitionTime":"2026-02-16T17:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.934198 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tf698" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tf698\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc 
kubenswrapper[4794]: I0216 17:00:24.952809 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e0707849
0ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.965749 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.981407 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4794]: I0216 17:00:24.994071 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.005820 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.017035 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"237c381f-d225-4a4b-8bc9-6c03ee09015f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca731f62fcf2b8c8e68925b2b13cf1f61cab4b77425c85820278f710f4d8c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b6dad949f71a170816c56dbc1ad2c99e88e7ecf1043d74bb077950f135eeed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cmzfs\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.029494 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:5
9:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcc
e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.036187 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.036235 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.036245 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:25 crc kubenswrapper[4794]: 
I0216 17:00:25.036261 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.036271 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:25Z","lastTransitionTime":"2026-02-16T17:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.138258 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.138295 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.138322 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.138339 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.138350 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:25Z","lastTransitionTime":"2026-02-16T17:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.241949 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.242017 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.242039 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.242067 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.242087 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:25Z","lastTransitionTime":"2026-02-16T17:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.245089 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.256163 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.260455 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.277556 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.291022 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.307117 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.320584 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.334083 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.345374 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:25 crc 
kubenswrapper[4794]: I0216 17:00:25.345418 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.345429 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.345445 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.345458 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:25Z","lastTransitionTime":"2026-02-16T17:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.348447 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366
edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.360473 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.371495 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.384995 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312f
cc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.402042 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"message\\\":\\\"cs-daemon-tf698\\\\\\\", UID:\\\\\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26909\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-tf698: failed to update pod 
openshift-multus/network-metrics-daemon-tf698: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z\\\\nI0216 17:00:20.291733 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 17:00:20.291760 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 17:00:20.291816 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 17:00:20.291837 6450 factory.go:656] Stopping watch factory\\\\nI0216 17:00:20.291853 6450 ovnkube.go:599] Stopped ovnkube\\\\nI0216 17:00:20.291877 6450 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 17:00:20.291880 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nF0216 17:00:20.291948 6450 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9krvl_openshift-ovn-kubernetes(d985e4f1-78bb-43f9-b86c-cd47831d602c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7
713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.413559 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.426083 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tf698" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tf698\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc 
kubenswrapper[4794]: I0216 17:00:25.439384 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.448000 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.448084 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.448098 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.448128 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.448144 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:25Z","lastTransitionTime":"2026-02-16T17:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.448825 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.459578 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"237c381f-d225-4a4b-8bc9-6c03ee09015f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca731f62fcf2b8c8e68925b2b13cf1f61cab4b77425c85820278f710f4d8c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b6dad949f71a170816c56dbc1ad2c99e88e7ecf1043d74bb077950f135eeed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cmzfs\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.549846 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.549892 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.549901 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.549917 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.549926 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:25Z","lastTransitionTime":"2026-02-16T17:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.652183 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.652215 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.652233 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.652252 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.652264 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:25Z","lastTransitionTime":"2026-02-16T17:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.755512 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.755560 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.755573 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.755591 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.755602 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:25Z","lastTransitionTime":"2026-02-16T17:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.762817 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 21:06:22.402035157 +0000 UTC Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.790501 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.790604 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:25 crc kubenswrapper[4794]: E0216 17:00:25.790657 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.790712 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.790766 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:25 crc kubenswrapper[4794]: E0216 17:00:25.790739 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:25 crc kubenswrapper[4794]: E0216 17:00:25.790883 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 17:00:25 crc kubenswrapper[4794]: E0216 17:00:25.790930 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.857338 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.857374 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.857382 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.857397 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.857407 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:25Z","lastTransitionTime":"2026-02-16T17:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.960036 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.960076 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.960086 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.960104 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:25 crc kubenswrapper[4794]: I0216 17:00:25.960113 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:25Z","lastTransitionTime":"2026-02-16T17:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.063623 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.063699 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.063723 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.063749 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.063768 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:26Z","lastTransitionTime":"2026-02-16T17:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.166083 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.166126 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.166138 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.166152 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.166161 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:26Z","lastTransitionTime":"2026-02-16T17:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.231520 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs\") pod \"network-metrics-daemon-tf698\" (UID: \"894bff1b-b8b9-4c28-8ffe-0e0469958227\") " pod="openshift-multus/network-metrics-daemon-tf698"
Feb 16 17:00:26 crc kubenswrapper[4794]: E0216 17:00:26.231664 4794 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 17:00:26 crc kubenswrapper[4794]: E0216 17:00:26.231738 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs podName:894bff1b-b8b9-4c28-8ffe-0e0469958227 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:42.231725657 +0000 UTC m=+68.179820304 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs") pod "network-metrics-daemon-tf698" (UID: "894bff1b-b8b9-4c28-8ffe-0e0469958227") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.268977 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.269014 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.269025 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.269041 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.269052 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:26Z","lastTransitionTime":"2026-02-16T17:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.371190 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.371360 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.371379 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.371395 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.371406 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:26Z","lastTransitionTime":"2026-02-16T17:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.473809 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.473860 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.473877 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.473897 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.473912 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:26Z","lastTransitionTime":"2026-02-16T17:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.576579 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.576623 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.576632 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.576647 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.576658 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:26Z","lastTransitionTime":"2026-02-16T17:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.679744 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.679781 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.679789 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.679804 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.679813 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:26Z","lastTransitionTime":"2026-02-16T17:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.763819 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:52:43.010776805 +0000 UTC
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.783029 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.783079 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.783091 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.783110 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.783123 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:26Z","lastTransitionTime":"2026-02-16T17:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.886813 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.886964 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.887040 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.887116 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.887154 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:26Z","lastTransitionTime":"2026-02-16T17:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.989682 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.989729 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.989744 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.989765 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:26 crc kubenswrapper[4794]: I0216 17:00:26.989781 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:26Z","lastTransitionTime":"2026-02-16T17:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.092443 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.092492 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.092505 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.092523 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.092535 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:27Z","lastTransitionTime":"2026-02-16T17:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.220103 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.220131 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.220140 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.220153 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.220163 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:27Z","lastTransitionTime":"2026-02-16T17:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.322127 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.322161 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.322171 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.322187 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.322198 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:27Z","lastTransitionTime":"2026-02-16T17:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.424558 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.424614 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.424630 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.424653 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.424669 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:27Z","lastTransitionTime":"2026-02-16T17:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.528093 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.528153 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.528170 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.528194 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.528213 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:27Z","lastTransitionTime":"2026-02-16T17:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.627042 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.627085 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.627113 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 17:00:27 crc kubenswrapper[4794]: E0216 17:00:27.627181 4794 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 17:00:27 crc kubenswrapper[4794]: E0216 17:00:27.627227 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:59.62721429 +0000 UTC m=+85.575308937 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 17:00:27 crc kubenswrapper[4794]: E0216 17:00:27.627240 4794 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 17:00:27 crc kubenswrapper[4794]: E0216 17:00:27.627405 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:59.627382764 +0000 UTC m=+85.575477481 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 17:00:27 crc kubenswrapper[4794]: E0216 17:00:27.627487 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 17:00:27 crc kubenswrapper[4794]: E0216 17:00:27.627540 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 17:00:27 crc kubenswrapper[4794]: E0216 17:00:27.627562 4794 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 17:00:27 crc kubenswrapper[4794]: E0216 17:00:27.627672 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:59.627638191 +0000 UTC m=+85.575732878 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.630594 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.630639 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.630658 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.630679 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.630693 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:27Z","lastTransitionTime":"2026-02-16T17:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.728105 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.728260 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 17:00:27 crc kubenswrapper[4794]: E0216 17:00:27.728385 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:00:59.728367435 +0000 UTC m=+85.676462082 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 17:00:27 crc kubenswrapper[4794]: E0216 17:00:27.728570 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 17:00:27 crc kubenswrapper[4794]: E0216 17:00:27.728632 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 17:00:27 crc kubenswrapper[4794]: E0216 17:00:27.728654 4794 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 17:00:27 crc kubenswrapper[4794]: E0216 17:00:27.728750 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:59.728724645 +0000 UTC m=+85.676819332 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.733729 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.733777 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.733789 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.733811 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.733833 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:27Z","lastTransitionTime":"2026-02-16T17:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.764139 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 02:55:41.81988114 +0000 UTC
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.790589 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.790590 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.790599 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.790821 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 17:00:27 crc kubenswrapper[4794]: E0216 17:00:27.790943 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 17:00:27 crc kubenswrapper[4794]: E0216 17:00:27.791143 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 17:00:27 crc kubenswrapper[4794]: E0216 17:00:27.791271 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227"
Feb 16 17:00:27 crc kubenswrapper[4794]: E0216 17:00:27.791535 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.836564 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.836644 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.836669 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.836701 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.836725 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:27Z","lastTransitionTime":"2026-02-16T17:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.939991 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.940076 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.940100 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.940132 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:00:27 crc kubenswrapper[4794]: I0216 17:00:27.940153 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:27Z","lastTransitionTime":"2026-02-16T17:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.043073 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.043144 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.043168 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.043198 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.043222 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:28Z","lastTransitionTime":"2026-02-16T17:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.146291 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.146361 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.146373 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.146391 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.146404 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:28Z","lastTransitionTime":"2026-02-16T17:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.249341 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.249466 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.249490 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.249519 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.249538 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:28Z","lastTransitionTime":"2026-02-16T17:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.352176 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.352218 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.352232 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.352251 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.352266 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:28Z","lastTransitionTime":"2026-02-16T17:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.454399 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.454449 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.454461 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.454478 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.454494 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:28Z","lastTransitionTime":"2026-02-16T17:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.556936 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.556989 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.557003 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.557023 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.557036 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:28Z","lastTransitionTime":"2026-02-16T17:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.659165 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.659218 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.659228 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.659245 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.659257 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:28Z","lastTransitionTime":"2026-02-16T17:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.761901 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.761968 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.761987 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.762378 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.762430 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:28Z","lastTransitionTime":"2026-02-16T17:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.765087 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 17:12:31.874226726 +0000 UTC Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.865161 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.865218 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.865234 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.865258 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.865276 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:28Z","lastTransitionTime":"2026-02-16T17:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.968604 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.968678 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.968700 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.968734 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:28 crc kubenswrapper[4794]: I0216 17:00:28.968757 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:28Z","lastTransitionTime":"2026-02-16T17:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.070962 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.070996 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.071004 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.071016 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.071057 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:29Z","lastTransitionTime":"2026-02-16T17:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.174435 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.174502 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.174519 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.174545 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.174565 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:29Z","lastTransitionTime":"2026-02-16T17:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.277477 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.277542 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.277572 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.277612 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.277635 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:29Z","lastTransitionTime":"2026-02-16T17:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.379888 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.379953 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.379969 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.379991 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.380006 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:29Z","lastTransitionTime":"2026-02-16T17:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.482843 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.482922 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.482948 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.482979 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.483002 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:29Z","lastTransitionTime":"2026-02-16T17:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.585557 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.585691 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.585704 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.585721 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.585765 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:29Z","lastTransitionTime":"2026-02-16T17:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.687919 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.687970 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.687982 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.687999 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.688011 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:29Z","lastTransitionTime":"2026-02-16T17:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.765326 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 13:44:44.589265148 +0000 UTC Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.790432 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.790466 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.790490 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:29 crc kubenswrapper[4794]: E0216 17:00:29.790596 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.790676 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:29 crc kubenswrapper[4794]: E0216 17:00:29.790767 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:29 crc kubenswrapper[4794]: E0216 17:00:29.790900 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.790931 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.790993 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.791016 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.791045 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.791068 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:29Z","lastTransitionTime":"2026-02-16T17:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:29 crc kubenswrapper[4794]: E0216 17:00:29.791071 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.893359 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.893400 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.893411 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.893425 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.893434 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:29Z","lastTransitionTime":"2026-02-16T17:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.995788 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.995837 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.995872 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.995890 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:29 crc kubenswrapper[4794]: I0216 17:00:29.995902 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:29Z","lastTransitionTime":"2026-02-16T17:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.098746 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.098788 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.098796 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.098811 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.098820 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:30Z","lastTransitionTime":"2026-02-16T17:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.200680 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.200742 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.200765 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.200794 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.200820 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:30Z","lastTransitionTime":"2026-02-16T17:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.303285 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.303350 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.303359 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.303374 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.303384 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:30Z","lastTransitionTime":"2026-02-16T17:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.406582 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.406653 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.406709 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.406736 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.406755 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:30Z","lastTransitionTime":"2026-02-16T17:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.509372 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.509428 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.509443 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.509461 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.509473 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:30Z","lastTransitionTime":"2026-02-16T17:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.611949 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.612008 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.612025 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.612051 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.612069 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:30Z","lastTransitionTime":"2026-02-16T17:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.714281 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.714340 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.714348 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.714361 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.714369 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:30Z","lastTransitionTime":"2026-02-16T17:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.766245 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 03:22:29.812589799 +0000 UTC Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.817232 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.817360 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.817397 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.817427 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.817445 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:30Z","lastTransitionTime":"2026-02-16T17:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.919539 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.919594 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.919610 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.919634 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:30 crc kubenswrapper[4794]: I0216 17:00:30.919652 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:30Z","lastTransitionTime":"2026-02-16T17:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.022253 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.022347 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.022367 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.022391 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.022408 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:31Z","lastTransitionTime":"2026-02-16T17:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.125395 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.125456 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.125468 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.125485 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.125498 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:31Z","lastTransitionTime":"2026-02-16T17:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.230730 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.230818 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.230853 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.230885 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.230912 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:31Z","lastTransitionTime":"2026-02-16T17:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.333961 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.334002 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.334013 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.334028 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.334040 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:31Z","lastTransitionTime":"2026-02-16T17:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.448401 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.448448 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.448458 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.448476 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.448487 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:31Z","lastTransitionTime":"2026-02-16T17:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.551749 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.551792 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.551803 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.551826 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.551838 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:31Z","lastTransitionTime":"2026-02-16T17:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.655608 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.655681 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.655696 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.655722 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.655737 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:31Z","lastTransitionTime":"2026-02-16T17:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.758050 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.758094 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.758103 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.758116 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.758127 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:31Z","lastTransitionTime":"2026-02-16T17:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.767247 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 06:33:17.260152498 +0000 UTC Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.791221 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.791255 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.791271 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.791232 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:31 crc kubenswrapper[4794]: E0216 17:00:31.791361 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:31 crc kubenswrapper[4794]: E0216 17:00:31.791452 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:31 crc kubenswrapper[4794]: E0216 17:00:31.791553 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:31 crc kubenswrapper[4794]: E0216 17:00:31.791598 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.861544 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.861588 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.861599 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.861625 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.861637 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:31Z","lastTransitionTime":"2026-02-16T17:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.964812 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.964858 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.964868 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.964885 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:31 crc kubenswrapper[4794]: I0216 17:00:31.964897 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:31Z","lastTransitionTime":"2026-02-16T17:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.067685 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.067745 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.067759 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.067775 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.067791 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.170671 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.170729 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.170745 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.170770 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.170786 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.273199 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.273333 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.273365 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.273399 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.273419 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.375662 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.375702 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.375714 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.375730 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.375741 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.479172 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.479204 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.479213 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.479226 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.479235 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.583189 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.584055 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.584094 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.584119 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.584142 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.638502 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.638550 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.638562 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.638578 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.638589 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4794]: E0216 17:00:32.654853 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.659529 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.659581 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.659619 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.659654 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.659677 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4794]: E0216 17:00:32.675810 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.679685 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.679718 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.679729 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.679745 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.679758 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4794]: E0216 17:00:32.692192 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.695837 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.695880 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.695892 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.695908 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.695921 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4794]: E0216 17:00:32.709335 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.712883 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.712919 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.712930 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.712945 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.712955 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4794]: E0216 17:00:32.724683 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4794]: E0216 17:00:32.724806 4794 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.726329 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.726373 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.726382 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.726394 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.726403 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.767540 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 09:23:25.652387454 +0000 UTC Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.829156 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.829212 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.829225 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.829241 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.829253 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.931943 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.931986 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.932004 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.932019 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4794]: I0216 17:00:32.932029 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.034349 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.034388 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.034397 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.034414 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.034425 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.137211 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.137263 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.137274 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.137289 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.137332 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.239994 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.240095 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.240119 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.240149 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.240166 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.343024 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.343076 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.343130 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.343157 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.343174 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.445645 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.445694 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.445705 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.445722 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.445735 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.548958 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.549003 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.549020 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.549045 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.549059 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.651884 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.651922 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.651932 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.651947 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.651958 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.755256 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.755355 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.755368 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.755386 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.755397 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.768482 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 05:36:02.178220036 +0000 UTC Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.791159 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.791201 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.791187 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.791176 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:33 crc kubenswrapper[4794]: E0216 17:00:33.791413 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:33 crc kubenswrapper[4794]: E0216 17:00:33.791567 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:33 crc kubenswrapper[4794]: E0216 17:00:33.791682 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:33 crc kubenswrapper[4794]: E0216 17:00:33.791861 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.792530 4794 scope.go:117] "RemoveContainer" containerID="9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea" Feb 16 17:00:33 crc kubenswrapper[4794]: E0216 17:00:33.792761 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9krvl_openshift-ovn-kubernetes(d985e4f1-78bb-43f9-b86c-cd47831d602c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.859231 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.859289 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.859333 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.859372 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.859394 4794 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.963491 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.963594 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.963673 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.963702 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4794]: I0216 17:00:33.963720 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.066984 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.067026 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.067034 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.067048 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.067057 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:34Z","lastTransitionTime":"2026-02-16T17:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.169376 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.169453 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.169478 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.169581 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.169607 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:34Z","lastTransitionTime":"2026-02-16T17:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.273539 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.273607 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.273628 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.273656 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.273676 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:34Z","lastTransitionTime":"2026-02-16T17:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.376832 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.376891 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.376914 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.376941 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.376961 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:34Z","lastTransitionTime":"2026-02-16T17:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.479773 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.479818 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.479831 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.479851 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.479865 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:34Z","lastTransitionTime":"2026-02-16T17:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.583035 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.583088 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.583100 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.583116 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.583127 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:34Z","lastTransitionTime":"2026-02-16T17:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.685646 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.685690 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.685702 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.685731 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.685742 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:34Z","lastTransitionTime":"2026-02-16T17:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.769529 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 00:16:59.091015154 +0000 UTC Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.787394 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.787432 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.787444 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.787459 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.787470 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:34Z","lastTransitionTime":"2026-02-16T17:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.811457 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.825602 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.837893 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.857125 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312f
cc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.879334 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"message\\\":\\\"cs-daemon-tf698\\\\\\\", UID:\\\\\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26909\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-tf698: failed to update pod 
openshift-multus/network-metrics-daemon-tf698: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z\\\\nI0216 17:00:20.291733 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 17:00:20.291760 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 17:00:20.291816 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 17:00:20.291837 6450 factory.go:656] Stopping watch factory\\\\nI0216 17:00:20.291853 6450 ovnkube.go:599] Stopped ovnkube\\\\nI0216 17:00:20.291877 6450 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 17:00:20.291880 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nF0216 17:00:20.291948 6450 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9krvl_openshift-ovn-kubernetes(d985e4f1-78bb-43f9-b86c-cd47831d602c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7
713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.888807 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.888841 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.888853 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.888869 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.888882 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:34Z","lastTransitionTime":"2026-02-16T17:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.895275 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.906016 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tf698" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tf698\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc 
kubenswrapper[4794]: I0216 17:00:34.919386 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b09eb0-58bb-41dc-9660-eeea82e4496a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e67602098c1fa62e4147637862358b3139d1307c7f2d09fc3c715ea67520fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f61dc0173699c28378af80479249a69f3056651048398a9699c1e268d386329\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090c39431176a8d41d18c2d8583aaa438dd244ceeeebaef0ee502ab9b0958d86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba8e70eecfbf8e03bddd7c5db4b683416b21b4e717c377954dfec7ff20d134e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8e70eecfbf8e03bddd7c5db4b683416b21b4e717c377954dfec7ff20d134e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.930418 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.940669 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.951931 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"237c381f-d225-4a4b-8bc9-6c03ee09015f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca731f62fcf2b8c8e68925b2b13cf1f61cab4b77425c85820278f710f4d8c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b6dad949f71a170816c56dbc1ad2c99e88e7ecf1043d74bb077950f135eeed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cmzfs\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.964987 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:5
9:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcc
e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.979124 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.992158 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.992203 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.992216 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.992238 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.992249 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:34Z","lastTransitionTime":"2026-02-16T17:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:34 crc kubenswrapper[4794]: I0216 17:00:34.993638 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.004333 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:35Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.016946 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:35Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.030789 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:35Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.094633 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:35 crc 
kubenswrapper[4794]: I0216 17:00:35.094949 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.095134 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.095372 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.095592 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:35Z","lastTransitionTime":"2026-02-16T17:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.200195 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.200336 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.200364 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.200394 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.200417 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:35Z","lastTransitionTime":"2026-02-16T17:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.303643 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.303703 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.303756 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.303783 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.303800 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:35Z","lastTransitionTime":"2026-02-16T17:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.406459 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.406546 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.406561 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.406588 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.406607 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:35Z","lastTransitionTime":"2026-02-16T17:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.508744 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.508801 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.508814 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.508833 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.508847 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:35Z","lastTransitionTime":"2026-02-16T17:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.611758 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.611808 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.611819 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.611837 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.611852 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:35Z","lastTransitionTime":"2026-02-16T17:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.715567 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.715607 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.715623 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.715645 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.715662 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:35Z","lastTransitionTime":"2026-02-16T17:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.770171 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 17:40:23.951835133 +0000 UTC Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.790614 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.790759 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:35 crc kubenswrapper[4794]: E0216 17:00:35.790871 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.791097 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.791140 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:35 crc kubenswrapper[4794]: E0216 17:00:35.791198 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:35 crc kubenswrapper[4794]: E0216 17:00:35.791362 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:35 crc kubenswrapper[4794]: E0216 17:00:35.791490 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.818228 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.818278 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.818288 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.818325 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.818338 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:35Z","lastTransitionTime":"2026-02-16T17:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.920899 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.920961 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.920981 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.921003 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:35 crc kubenswrapper[4794]: I0216 17:00:35.921025 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:35Z","lastTransitionTime":"2026-02-16T17:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.024059 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.024124 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.024162 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.024198 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.024222 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:36Z","lastTransitionTime":"2026-02-16T17:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.127239 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.127391 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.127417 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.127455 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.127491 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:36Z","lastTransitionTime":"2026-02-16T17:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.230618 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.230670 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.230683 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.230703 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.230719 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:36Z","lastTransitionTime":"2026-02-16T17:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.333180 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.333256 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.333276 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.333335 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.333355 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:36Z","lastTransitionTime":"2026-02-16T17:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.436071 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.436465 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.436631 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.436780 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.436933 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:36Z","lastTransitionTime":"2026-02-16T17:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.540693 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.540744 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.540762 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.540791 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.540811 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:36Z","lastTransitionTime":"2026-02-16T17:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.642984 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.643033 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.643052 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.643076 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.643093 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:36Z","lastTransitionTime":"2026-02-16T17:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.745472 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.745517 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.745531 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.745550 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.745564 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:36Z","lastTransitionTime":"2026-02-16T17:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.770882 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:09:47.439866664 +0000 UTC Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.848848 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.848901 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.848918 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.849082 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.849113 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:36Z","lastTransitionTime":"2026-02-16T17:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.952475 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.952549 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.952572 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.952602 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:36 crc kubenswrapper[4794]: I0216 17:00:36.952620 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:36Z","lastTransitionTime":"2026-02-16T17:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.055572 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.055629 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.055640 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.055661 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.055674 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:37Z","lastTransitionTime":"2026-02-16T17:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.158024 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.158076 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.158085 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.158104 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.158116 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:37Z","lastTransitionTime":"2026-02-16T17:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.260857 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.260907 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.260921 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.260940 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.260952 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:37Z","lastTransitionTime":"2026-02-16T17:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.363909 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.363966 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.363985 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.364009 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.364027 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:37Z","lastTransitionTime":"2026-02-16T17:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.467563 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.467639 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.467658 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.467684 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.467702 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:37Z","lastTransitionTime":"2026-02-16T17:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.573869 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.573928 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.573950 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.573986 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.574006 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:37Z","lastTransitionTime":"2026-02-16T17:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.677466 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.677540 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.677560 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.677585 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.677604 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:37Z","lastTransitionTime":"2026-02-16T17:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.771108 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 06:11:08.856410995 +0000 UTC Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.779969 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.780020 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.780035 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.780059 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.780076 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:37Z","lastTransitionTime":"2026-02-16T17:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.791234 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:37 crc kubenswrapper[4794]: E0216 17:00:37.791456 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.791676 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:37 crc kubenswrapper[4794]: E0216 17:00:37.791782 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.792222 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:37 crc kubenswrapper[4794]: E0216 17:00:37.793857 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.794759 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:37 crc kubenswrapper[4794]: E0216 17:00:37.794918 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.881880 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.881916 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.881927 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.881942 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.881952 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:37Z","lastTransitionTime":"2026-02-16T17:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.984518 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.984611 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.984622 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.984638 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:37 crc kubenswrapper[4794]: I0216 17:00:37.984650 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:37Z","lastTransitionTime":"2026-02-16T17:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.087393 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.087441 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.087457 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.087479 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.087495 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:38Z","lastTransitionTime":"2026-02-16T17:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.194355 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.194400 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.194418 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.194442 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.194460 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:38Z","lastTransitionTime":"2026-02-16T17:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.296770 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.296802 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.296812 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.296827 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.296837 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:38Z","lastTransitionTime":"2026-02-16T17:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.399189 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.399216 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.399224 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.399237 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.399245 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:38Z","lastTransitionTime":"2026-02-16T17:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.501473 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.501518 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.501533 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.501552 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.501567 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:38Z","lastTransitionTime":"2026-02-16T17:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.604321 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.604362 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.604374 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.604393 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.604407 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:38Z","lastTransitionTime":"2026-02-16T17:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.706405 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.706440 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.706451 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.706466 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.706478 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:38Z","lastTransitionTime":"2026-02-16T17:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.771460 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:17:01.963861359 +0000 UTC Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.808259 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.808355 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.808375 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.808400 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.808418 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:38Z","lastTransitionTime":"2026-02-16T17:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.910489 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.910553 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.910572 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.910597 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:38 crc kubenswrapper[4794]: I0216 17:00:38.910620 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:38Z","lastTransitionTime":"2026-02-16T17:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.013249 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.013316 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.013329 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.013346 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.013358 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:39Z","lastTransitionTime":"2026-02-16T17:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.115711 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.115765 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.115783 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.115803 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.115818 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:39Z","lastTransitionTime":"2026-02-16T17:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.218651 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.218688 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.218697 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.218709 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.218718 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:39Z","lastTransitionTime":"2026-02-16T17:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.320440 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.320494 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.320511 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.320531 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.320548 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:39Z","lastTransitionTime":"2026-02-16T17:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.423147 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.423200 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.423217 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.423242 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.423257 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:39Z","lastTransitionTime":"2026-02-16T17:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.525367 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.525405 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.525416 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.525430 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.525440 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:39Z","lastTransitionTime":"2026-02-16T17:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.627542 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.627571 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.627579 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.627591 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.627599 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:39Z","lastTransitionTime":"2026-02-16T17:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.729209 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.729246 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.729256 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.729270 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.729279 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:39Z","lastTransitionTime":"2026-02-16T17:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.771948 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 03:22:18.388935502 +0000 UTC Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.791259 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.791276 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.791361 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.791551 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:39 crc kubenswrapper[4794]: E0216 17:00:39.791547 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:39 crc kubenswrapper[4794]: E0216 17:00:39.791638 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:39 crc kubenswrapper[4794]: E0216 17:00:39.791697 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:39 crc kubenswrapper[4794]: E0216 17:00:39.791753 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.831398 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.831442 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.831453 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.831469 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.831479 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:39Z","lastTransitionTime":"2026-02-16T17:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.934241 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.934284 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.934333 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.934364 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:39 crc kubenswrapper[4794]: I0216 17:00:39.934387 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:39Z","lastTransitionTime":"2026-02-16T17:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.036203 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.036237 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.036249 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.036266 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.036278 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:40Z","lastTransitionTime":"2026-02-16T17:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.139003 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.139036 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.139046 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.139060 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.139071 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:40Z","lastTransitionTime":"2026-02-16T17:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.241239 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.241347 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.241375 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.241405 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.241425 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:40Z","lastTransitionTime":"2026-02-16T17:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.344427 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.344471 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.344481 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.344540 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.344549 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:40Z","lastTransitionTime":"2026-02-16T17:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.449689 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.449747 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.449766 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.449790 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.449812 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:40Z","lastTransitionTime":"2026-02-16T17:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.552157 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.552201 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.552213 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.552230 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.552240 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:40Z","lastTransitionTime":"2026-02-16T17:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.655136 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.655198 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.655221 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.655252 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.655276 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:40Z","lastTransitionTime":"2026-02-16T17:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.757615 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.757676 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.757685 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.757699 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.757708 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:40Z","lastTransitionTime":"2026-02-16T17:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.772063 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:54:17.153044128 +0000 UTC Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.860575 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.860616 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.860630 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.860646 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.860658 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:40Z","lastTransitionTime":"2026-02-16T17:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.962991 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.963050 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.963062 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.963077 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:40 crc kubenswrapper[4794]: I0216 17:00:40.963088 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:40Z","lastTransitionTime":"2026-02-16T17:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.065480 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.065533 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.065544 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.065556 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.065565 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:41Z","lastTransitionTime":"2026-02-16T17:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.168445 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.168506 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.168526 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.168546 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.168558 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:41Z","lastTransitionTime":"2026-02-16T17:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.270538 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.270581 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.270596 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.270618 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.270634 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:41Z","lastTransitionTime":"2026-02-16T17:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.373117 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.373149 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.373161 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.373176 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.373187 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:41Z","lastTransitionTime":"2026-02-16T17:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.475779 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.475827 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.475838 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.475856 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.475870 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:41Z","lastTransitionTime":"2026-02-16T17:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.577689 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.577740 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.577750 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.577767 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.577777 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:41Z","lastTransitionTime":"2026-02-16T17:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.680133 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.680167 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.680175 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.680189 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.680198 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:41Z","lastTransitionTime":"2026-02-16T17:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.772368 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:35:15.280813623 +0000 UTC Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.782184 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.782214 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.782229 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.782247 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.782258 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:41Z","lastTransitionTime":"2026-02-16T17:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.790906 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.790969 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.790982 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.790998 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:41 crc kubenswrapper[4794]: E0216 17:00:41.791037 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:41 crc kubenswrapper[4794]: E0216 17:00:41.791122 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:41 crc kubenswrapper[4794]: E0216 17:00:41.791226 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:41 crc kubenswrapper[4794]: E0216 17:00:41.791337 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.884578 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.884643 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.884667 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.884711 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.884729 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:41Z","lastTransitionTime":"2026-02-16T17:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.986776 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.986813 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.986831 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.986848 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:41 crc kubenswrapper[4794]: I0216 17:00:41.986857 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:41Z","lastTransitionTime":"2026-02-16T17:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.089109 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.089176 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.089199 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.089229 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.089270 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.191703 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.191756 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.191771 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.191791 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.191808 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.287747 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs\") pod \"network-metrics-daemon-tf698\" (UID: \"894bff1b-b8b9-4c28-8ffe-0e0469958227\") " pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:42 crc kubenswrapper[4794]: E0216 17:00:42.287960 4794 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:42 crc kubenswrapper[4794]: E0216 17:00:42.288060 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs podName:894bff1b-b8b9-4c28-8ffe-0e0469958227 nodeName:}" failed. No retries permitted until 2026-02-16 17:01:14.288037579 +0000 UTC m=+100.236132306 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs") pod "network-metrics-daemon-tf698" (UID: "894bff1b-b8b9-4c28-8ffe-0e0469958227") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.294018 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.294059 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.294070 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.294086 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.294097 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.397371 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.397429 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.397446 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.397472 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.397488 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.499530 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.499574 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.499587 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.499602 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.499635 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.602141 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.602182 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.602192 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.602207 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.602218 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.703842 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.703877 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.703889 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.703904 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.703914 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.768481 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.768533 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.768546 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.768567 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.768588 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.772514 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:51:51.925284667 +0000 UTC Feb 16 17:00:42 crc kubenswrapper[4794]: E0216 17:00:42.780846 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",
\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:42Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.783780 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.783842 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.783858 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.783873 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.783883 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4794]: E0216 17:00:42.794903 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:42Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.802342 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.802402 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.802416 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.802436 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.802454 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.802632 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 16 17:00:42 crc kubenswrapper[4794]: E0216 17:00:42.815246 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-m
arketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc
0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3
ec8421a1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:42Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.818950 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.818998 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.819016 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.819038 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.819055 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4794]: E0216 17:00:42.832485 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:42Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.835890 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.835944 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.835967 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.835992 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.836013 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4794]: E0216 17:00:42.852604 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:42Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:42 crc kubenswrapper[4794]: E0216 17:00:42.852756 4794 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.854250 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.854283 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.854315 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.854333 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.854348 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.956675 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.956717 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.956730 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.956746 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4794]: I0216 17:00:42.956758 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.058844 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.059049 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.059116 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.059185 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.059256 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.161199 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.161353 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.161472 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.162262 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.162629 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.264843 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.264882 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.264895 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.264911 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.264923 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.367653 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.367695 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.367704 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.367718 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.367727 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.470208 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.470282 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.470298 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.470338 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.470353 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.572199 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.572231 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.572240 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.572253 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.572261 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.674886 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.675086 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.675152 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.675218 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.675286 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.773434 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:32:22.579069124 +0000 UTC Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.777193 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.777238 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.777250 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.777268 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.777281 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.790765 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.790818 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:43 crc kubenswrapper[4794]: E0216 17:00:43.791504 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.790892 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:43 crc kubenswrapper[4794]: E0216 17:00:43.791591 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.790855 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:43 crc kubenswrapper[4794]: E0216 17:00:43.791707 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:43 crc kubenswrapper[4794]: E0216 17:00:43.791756 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.879827 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.879889 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.879906 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.879928 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.879947 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.982485 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.982550 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.982576 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.982606 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4794]: I0216 17:00:43.982633 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.085640 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.085688 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.085705 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.085727 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.085741 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:44Z","lastTransitionTime":"2026-02-16T17:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.188049 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.188083 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.188093 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.188107 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.188117 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:44Z","lastTransitionTime":"2026-02-16T17:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.240718 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zwhdn_f6f074ad-d6ce-4c47-aa3c-196e4ad30e64/kube-multus/0.log" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.240777 4794 generic.go:334] "Generic (PLEG): container finished" podID="f6f074ad-d6ce-4c47-aa3c-196e4ad30e64" containerID="9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757" exitCode=1 Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.240812 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zwhdn" event={"ID":"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64","Type":"ContainerDied","Data":"9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757"} Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.241289 4794 scope.go:117] "RemoveContainer" containerID="9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.251922 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b09eb0-58bb-41dc-9660-eeea82e4496a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e67602098c1fa62e4147637862358b3139d1307c7f2d09fc3c715ea67520fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f61dc0173699c28378af80479249a69f3056651048398a9699c1e268d386329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090c39431176a8d41d18c2d8583aaa438dd244ceeeebaef0ee502ab9b0958d86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba8e70eecfbf8e03bddd7c5db4b683416b21b4e717c377954dfec7ff20d134e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ba8e70eecfbf8e03bddd7c5db4b683416b21b4e717c377954dfec7ff20d134e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.262548 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.275041 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.285561 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"237c381f-d225-4a4b-8bc9-6c03ee09015f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca731f62fcf2b8c8e68925b2b13cf1f61cab4b77425c85820278f710f4d8c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b6dad949f71a170816c56dbc1ad2c99e88e7ecf1043d74bb077950f135eeed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cmzfs\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.290124 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.290148 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.290157 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.290174 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.290183 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:44Z","lastTransitionTime":"2026-02-16T17:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.300849 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c
13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.315298 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd2a1cf-d118-4f18-9ef8-7478fb22dcee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6269efb5010fa7baa3f906435c74594d73213e78cb782702c1fda4e3feae5f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b570424b4cd97b02ca03d9901d70aab76c2037d3b3799978e1a116b0c8f5e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b570424b4cd97b02ca03d9901d70aab76c2037d3b3799978e1a116b0c8f5e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.329960 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.345458 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.355999 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.372827 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.386439 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"2026-02-16T16:59:58+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1005e3bb-676a-4f72-8899-1a6ff4f8312b\\\\n2026-02-16T16:59:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1005e3bb-676a-4f72-8899-1a6ff4f8312b to /host/opt/cni/bin/\\\\n2026-02-16T16:59:58Z [verbose] multus-daemon started\\\\n2026-02-16T16:59:58Z [verbose] Readiness Indicator file check\\\\n2026-02-16T17:00:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.393015 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.393046 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.393067 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.393082 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.393093 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:44Z","lastTransitionTime":"2026-02-16T17:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.401159 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.410572 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tf698" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tf698\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.423194 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.433892 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.445374 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.461422 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312f
cc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.483600 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"message\\\":\\\"cs-daemon-tf698\\\\\\\", UID:\\\\\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26909\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-tf698: failed to update pod 
openshift-multus/network-metrics-daemon-tf698: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z\\\\nI0216 17:00:20.291733 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 17:00:20.291760 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 17:00:20.291816 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 17:00:20.291837 6450 factory.go:656] Stopping watch factory\\\\nI0216 17:00:20.291853 6450 ovnkube.go:599] Stopped ovnkube\\\\nI0216 17:00:20.291877 6450 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 17:00:20.291880 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nF0216 17:00:20.291948 6450 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9krvl_openshift-ovn-kubernetes(d985e4f1-78bb-43f9-b86c-cd47831d602c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7
713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.494923 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.494977 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.494986 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.495003 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.495014 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:44Z","lastTransitionTime":"2026-02-16T17:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.597199 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.597236 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.597246 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.597263 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.597275 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:44Z","lastTransitionTime":"2026-02-16T17:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.700608 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.700695 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.700707 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.700724 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.700736 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:44Z","lastTransitionTime":"2026-02-16T17:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.774503 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 08:27:09.219076711 +0000 UTC Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.792625 4794 scope.go:117] "RemoveContainer" containerID="9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.802816 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.802851 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.802859 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.802870 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.802881 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:44Z","lastTransitionTime":"2026-02-16T17:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.812039 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.825963 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.839216 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312f
cc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.856735 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"message\\\":\\\"cs-daemon-tf698\\\\\\\", UID:\\\\\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26909\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-tf698: failed to update pod 
openshift-multus/network-metrics-daemon-tf698: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z\\\\nI0216 17:00:20.291733 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 17:00:20.291760 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 17:00:20.291816 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 17:00:20.291837 6450 factory.go:656] Stopping watch factory\\\\nI0216 17:00:20.291853 6450 ovnkube.go:599] Stopped ovnkube\\\\nI0216 17:00:20.291877 6450 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 17:00:20.291880 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nF0216 17:00:20.291948 6450 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9krvl_openshift-ovn-kubernetes(d985e4f1-78bb-43f9-b86c-cd47831d602c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7
713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.866540 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.877381 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tf698" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tf698\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc 
kubenswrapper[4794]: I0216 17:00:44.890450 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e0707849
0ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.901285 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.904236 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.904265 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.904315 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.904331 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.904340 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:44Z","lastTransitionTime":"2026-02-16T17:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.913440 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"237c381f-d225-4a4b-8bc9-6c03ee09015f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca731f62fcf2b8c8e68925b2b13cf1f61cab4b77425c85820278f710f4d8c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b6dad949f71a170816c56dbc1ad2c99e88e7ecf1043d74bb077950f135eeed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cmzfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.925811 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b09eb0-58bb-41dc-9660-eeea82e4496a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e67602098c1fa62e4147637862358b3139d1307c7f2d09fc3c715ea67520fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f61dc0173699c28378af80479249a69f3056651048398a9699c1e268d386329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090c39431176a8d41d18c2d8583aaa438dd244ceeeebaef0ee502ab9b0958d86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba8e70eecfbf8e03bddd7c5db4b683416b21b4e717c377954dfec7ff20d134e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8e70eecfbf8e03bddd7c5db4b683416b21b4e717c377954dfec7ff20d134e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.940072 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.955356 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.968124 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.978651 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:44 crc kubenswrapper[4794]: I0216 17:00:44.990938 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:44Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.005115 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"2026-02-16T16:59:58+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1005e3bb-676a-4f72-8899-1a6ff4f8312b\\\\n2026-02-16T16:59:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1005e3bb-676a-4f72-8899-1a6ff4f8312b to /host/opt/cni/bin/\\\\n2026-02-16T16:59:58Z [verbose] multus-daemon started\\\\n2026-02-16T16:59:58Z [verbose] Readiness Indicator file check\\\\n2026-02-16T17:00:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.006824 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.006878 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.006910 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.006931 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.006942 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:45Z","lastTransitionTime":"2026-02-16T17:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.014755 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd2a1cf-d118-4f18-9ef8-7478fb22dcee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6269efb5010fa7baa3f906435c74594d73213e78cb782702c1fda4e3feae5f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\
\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b570424b4cd97b02ca03d9901d70aab76c2037d3b3799978e1a116b0c8f5e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b570424b4cd97b02ca03d9901d70aab76c2037d3b3799978e1a116b0c8f5e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.025509 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.110156 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.110218 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.110230 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.110249 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.110263 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:45Z","lastTransitionTime":"2026-02-16T17:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.212676 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.212715 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.212725 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.212740 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.212750 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:45Z","lastTransitionTime":"2026-02-16T17:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.245911 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zwhdn_f6f074ad-d6ce-4c47-aa3c-196e4ad30e64/kube-multus/0.log" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.245987 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zwhdn" event={"ID":"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64","Type":"ContainerStarted","Data":"1a81814b182e8628b21c89d613668a46a0be932629aacc121699a0775ddc225d"} Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.248267 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9krvl_d985e4f1-78bb-43f9-b86c-cd47831d602c/ovnkube-controller/2.log" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.250749 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerStarted","Data":"6a9b07055fd16bf9dde792f372b5a19f7faf37d643ae4986f169c85fdcfe27d9"} Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.251234 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.258986 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.269057 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd2a1cf-d118-4f18-9ef8-7478fb22dcee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6269efb5010fa7baa3f906435c74594d73213e78cb782702c1fda4e3feae5f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b570424b4cd97b02ca03d9901d70aab76c2037d3b3799978e1a116b0c8f5e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b570424b4cd97b02ca03d9901d70aab76c2037d3b3799978e1a116b0c8f5e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.281027 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.292024 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.302964 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.315159 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.315203 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.315212 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.315227 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.315237 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:45Z","lastTransitionTime":"2026-02-16T17:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.315887 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.329266 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a81814b182e8628b21c89d613668a46a0be932629aacc121699a0775ddc225d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"2026-02-16T16:59:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1005e3bb-676a-4f72-8899-1a6ff4f8312b\\\\n2026-02-16T16:59:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1005e3bb-676a-4f72-8899-1a6ff4f8312b to /host/opt/cni/bin/\\\\n2026-02-16T16:59:58Z [verbose] multus-daemon started\\\\n2026-02-16T16:59:58Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T17:00:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.341986 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083
323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.353444 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tf698" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tf698\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc 
kubenswrapper[4794]: I0216 17:00:45.372436 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e0707849
0ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.387949 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.402059 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.417444 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.417488 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.417500 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.417517 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.417529 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:45Z","lastTransitionTime":"2026-02-16T17:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.418333 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.439219 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"message\\\":\\\"cs-daemon-tf698\\\\\\\", UID:\\\\\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\\\\\", 
APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26909\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-tf698: failed to update pod openshift-multus/network-metrics-daemon-tf698: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z\\\\nI0216 17:00:20.291733 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 17:00:20.291760 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 17:00:20.291816 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 17:00:20.291837 6450 factory.go:656] Stopping watch factory\\\\nI0216 17:00:20.291853 6450 ovnkube.go:599] Stopped ovnkube\\\\nI0216 17:00:20.291877 6450 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 17:00:20.291880 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nF0216 17:00:20.291948 6450 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9krvl_openshift-ovn-kubernetes(d985e4f1-78bb-43f9-b86c-cd47831d602c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7
713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.451213 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b09eb0-58bb-41dc-9660-eeea82e4496a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e67602098c1fa62e4147637862358b3139d1307c7f2d09fc3c715ea67520fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f61dc0173699c28378af80479249a69f3056651048398a9699c1e268d386329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090c39431176a8d41d18c2d8583aaa438dd244ceeeebaef0ee502ab9b0958d86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba8e70eecfbf8e03bddd7c5db4b683416b21b4e717c377954dfec7ff20d134e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ba8e70eecfbf8e03bddd7c5db4b683416b21b4e717c377954dfec7ff20d134e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.461774 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.475295 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.488130 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"237c381f-d225-4a4b-8bc9-6c03ee09015f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca731f62fcf2b8c8e68925b2b13cf1f61cab4b77425c85820278f710f4d8c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b6dad949f71a170816c56dbc1ad2c99e88e7ecf1043d74bb077950f135eeed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cmzfs\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.499158 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.510635 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.519753 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.519803 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.519815 4794 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.519831 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.519841 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:45Z","lastTransitionTime":"2026-02-16T17:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.526598 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a81814b182e8628b21c89d613668a46a0be932629a
acc121699a0775ddc225d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"2026-02-16T16:59:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1005e3bb-676a-4f72-8899-1a6ff4f8312b\\\\n2026-02-16T16:59:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1005e3bb-676a-4f72-8899-1a6ff4f8312b to /host/opt/cni/bin/\\\\n2026-02-16T16:59:58Z [verbose] multus-daemon started\\\\n2026-02-16T16:59:58Z [verbose] Readiness Indicator file check\\\\n2026-02-16T17:00:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.537427 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd2a1cf-d118-4f18-9ef8-7478fb22dcee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6269efb5010fa7baa3f906435c74594d73213e78cb782702c1fda4e3feae5f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242
b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b570424b4cd97b02ca03d9901d70aab76c2037d3b3799978e1a116b0c8f5e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b570424b4cd97b02ca03d9901d70aab76c2037d3b3799978e1a116b0c8f5e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 
2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.550092 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.560705 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.573336 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.587702 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312f
cc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.607106 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9b07055fd16bf9dde792f372b5a19f7faf37d643ae4986f169c85fdcfe27d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"message\\\":\\\"cs-daemon-tf698\\\\\\\", UID:\\\\\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26909\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-tf698: failed to update pod 
openshift-multus/network-metrics-daemon-tf698: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z\\\\nI0216 17:00:20.291733 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 17:00:20.291760 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 17:00:20.291816 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 17:00:20.291837 6450 factory.go:656] Stopping watch factory\\\\nI0216 17:00:20.291853 6450 ovnkube.go:599] Stopped ovnkube\\\\nI0216 17:00:20.291877 6450 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 17:00:20.291880 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nF0216 17:00:20.291948 6450 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.621904 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.622449 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.622488 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.622502 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.622519 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.622530 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:45Z","lastTransitionTime":"2026-02-16T17:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.637523 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tf698" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tf698\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc 
kubenswrapper[4794]: I0216 17:00:45.649860 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e0707849
0ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.661179 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.673428 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"237c381f-d225-4a4b-8bc9-6c03ee09015f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca731f62fcf2b8c8e68925b2b13cf1f61cab4b77425c85820278f710f4d8c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b6dad949f71a170816c56dbc1ad2c99e88e
7ecf1043d74bb077950f135eeed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cmzfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.683617 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b09eb0-58bb-41dc-9660-eeea82e4496a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e67602098c1fa62e4147637862358b3139d1307c7f2d09fc3c715ea67520fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f61dc0173699c28378af80479249a69f3056651048398a9699c1e268d386329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090c39431176a8d41d18c2d8583aaa438dd244ceeeebaef0ee502ab9b0958d86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba8e70eecfbf8e03bddd7c5db4b683416b21b4e717c377954dfec7ff20d134e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ba8e70eecfbf8e03bddd7c5db4b683416b21b4e717c377954dfec7ff20d134e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.692553 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.702899 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.713884 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:45Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.725903 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.725939 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.725948 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.725962 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.725973 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:45Z","lastTransitionTime":"2026-02-16T17:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.775258 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 15:42:56.120231323 +0000 UTC Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.790699 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.790775 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.790786 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.790715 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:45 crc kubenswrapper[4794]: E0216 17:00:45.790919 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:45 crc kubenswrapper[4794]: E0216 17:00:45.790842 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:45 crc kubenswrapper[4794]: E0216 17:00:45.791045 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:45 crc kubenswrapper[4794]: E0216 17:00:45.791153 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.828108 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.828173 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.828191 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.828220 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.828235 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:45Z","lastTransitionTime":"2026-02-16T17:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.930725 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.930845 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.930863 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.930889 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:45 crc kubenswrapper[4794]: I0216 17:00:45.930906 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:45Z","lastTransitionTime":"2026-02-16T17:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.033168 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.033261 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.033282 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.033338 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.033362 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:46Z","lastTransitionTime":"2026-02-16T17:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.136663 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.136749 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.136772 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.136802 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.136823 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:46Z","lastTransitionTime":"2026-02-16T17:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.239650 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.239724 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.239751 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.239791 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.239816 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:46Z","lastTransitionTime":"2026-02-16T17:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.256424 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9krvl_d985e4f1-78bb-43f9-b86c-cd47831d602c/ovnkube-controller/3.log" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.257476 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9krvl_d985e4f1-78bb-43f9-b86c-cd47831d602c/ovnkube-controller/2.log" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.261641 4794 generic.go:334] "Generic (PLEG): container finished" podID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerID="6a9b07055fd16bf9dde792f372b5a19f7faf37d643ae4986f169c85fdcfe27d9" exitCode=1 Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.261725 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerDied","Data":"6a9b07055fd16bf9dde792f372b5a19f7faf37d643ae4986f169c85fdcfe27d9"} Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.261807 4794 scope.go:117] "RemoveContainer" containerID="9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.263333 4794 scope.go:117] "RemoveContainer" containerID="6a9b07055fd16bf9dde792f372b5a19f7faf37d643ae4986f169c85fdcfe27d9" Feb 16 17:00:46 crc kubenswrapper[4794]: E0216 17:00:46.263698 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9krvl_openshift-ovn-kubernetes(d985e4f1-78bb-43f9-b86c-cd47831d602c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.287646 4794 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9b07055fd16bf9dde792f372b5a19f7faf37d643ae4986f169c85fdcfe27d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a35f79cea3289726aad21e06cdbef120a4acb566394eb9f0939efb5600609ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"message\\\":\\\"cs-daemon-tf698\\\\\\\", UID:\\\\\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26909\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-tf698: failed to update pod 
openshift-multus/network-metrics-daemon-tf698: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z\\\\nI0216 17:00:20.291733 6450 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 17:00:20.291760 6450 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 17:00:20.291816 6450 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 17:00:20.291837 6450 factory.go:656] Stopping watch factory\\\\nI0216 17:00:20.291853 6450 ovnkube.go:599] Stopped ovnkube\\\\nI0216 17:00:20.291877 6450 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 17:00:20.291880 6450 handler.go:208] Removed *v1.Node event handler 2\\\\nF0216 17:00:20.291948 6450 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a9b07055fd16bf9dde792f372b5a19f7faf37d643ae4986f169c85fdcfe27d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:45Z\\\",\\\"message\\\":\\\":00:45.541841 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0216 17:00:45.541847 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-tqtvb\\\\nI0216 17:00:45.541851 6847 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0216 17:00:45.541857 6847 obj_retry.go:365] Adding new 
object: *v1.Pod openshift-dns/node-resolver-tqtvb\\\\nI0216 17:00:45.541865 6847 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-tqtvb in node crc\\\\nI0216 17:00:45.541872 6847 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-tqtvb after 0 failed attempt(s)\\\\nI0216 17:00:45.541878 6847 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-tqtvb\\\\nI0216 17:00:45.541831 6847 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 17:00:45.541879 6847 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0216 17:00:45.541942 6847 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"
host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\
\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.299871 
4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.309686 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tf698" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tf698\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc 
kubenswrapper[4794]: I0216 17:00:46.325620 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e0707849
0ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.339423 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.342519 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.342589 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.342602 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:46 crc 
kubenswrapper[4794]: I0216 17:00:46.342619 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.342638 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:46Z","lastTransitionTime":"2026-02-16T17:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.353389 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 
17:00:46.366645 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9e
db1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"
cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.377437 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b09eb0-58bb-41dc-9660-eeea82e4496a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e67602098c1fa62e4147637862358b3139d1307c7f2d09fc3c715ea67520fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f61dc0173699c28378af80479249a69f3056651048398a9699c1e268d386329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090c39431176a8d41d18c2d8583aaa438dd244ceeeebaef0ee502ab9b0958d86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba8e70eecfbf8e03bddd7c5db4b683416b21b4e717c377954dfec7ff20d134e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8e70eecfbf8e03bddd7c5db4b683416b21b4e717c377954dfec7ff20d134e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.388277 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.397824 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.409225 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"237c381f-d225-4a4b-8bc9-6c03ee09015f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca731f62fcf2b8c8e68925b2b13cf1f61cab4b77425c85820278f710f4d8c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b6dad949f71a170816c56dbc1ad2c99e88e7ecf1043d74bb077950f135eeed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cmzfs\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.423478 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:5
9:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcc
e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.437742 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a81814b182e8628b21c89d613668a46a0be932629aacc121699a0775ddc225d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"2026-02-16T16:59:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1005e3bb-676a-4f72-8899-1a6ff4f8312b\\\\n2026-02-16T16:59:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1005e3bb-676a-4f72-8899-1a6ff4f8312b to /host/opt/cni/bin/\\\\n2026-02-16T16:59:58Z [verbose] multus-daemon started\\\\n2026-02-16T16:59:58Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T17:00:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.445700 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.445764 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.445782 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.445810 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.445828 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:46Z","lastTransitionTime":"2026-02-16T17:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.447873 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd2a1cf-d118-4f18-9ef8-7478fb22dcee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6269efb5010fa7baa3f906435c74594d73213e78cb782702c1fda4e3feae5f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b570424b4cd97b02ca03d9901d70aab76c2037d3b3799978e1a116b0c8f5e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b570424b4cd97b02ca03d9901d70aab76c2037d3b3799978e1a116b0c8f5e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.460031 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.474457 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.486447 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.500260 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.548368 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.548852 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.548928 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.549037 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.549121 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:46Z","lastTransitionTime":"2026-02-16T17:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.694291 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.694367 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.694381 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.694396 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.694406 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:46Z","lastTransitionTime":"2026-02-16T17:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.776223 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:09:39.94421593 +0000 UTC Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.799005 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.799072 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.799088 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.799115 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.799129 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:46Z","lastTransitionTime":"2026-02-16T17:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.902002 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.902045 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.902054 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.902070 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:46 crc kubenswrapper[4794]: I0216 17:00:46.902079 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:46Z","lastTransitionTime":"2026-02-16T17:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.004547 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.004599 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.004612 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.004630 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.004641 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:47Z","lastTransitionTime":"2026-02-16T17:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.107029 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.107062 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.107072 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.107083 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.107091 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:47Z","lastTransitionTime":"2026-02-16T17:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.208614 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.208655 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.208666 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.208683 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.208693 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:47Z","lastTransitionTime":"2026-02-16T17:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.269219 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9krvl_d985e4f1-78bb-43f9-b86c-cd47831d602c/ovnkube-controller/3.log" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.273639 4794 scope.go:117] "RemoveContainer" containerID="6a9b07055fd16bf9dde792f372b5a19f7faf37d643ae4986f169c85fdcfe27d9" Feb 16 17:00:47 crc kubenswrapper[4794]: E0216 17:00:47.274024 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9krvl_openshift-ovn-kubernetes(d985e4f1-78bb-43f9-b86c-cd47831d602c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.289104 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd2a1cf-d118-4f18-9ef8-7478fb22dcee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6269efb5010fa7baa3f906435c74594d73213e78cb782702c1fda4e3feae5f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b570424b4cd97b02ca03d9901d70aab76c2037d3b3799978e1a116b0c8f5e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b570424b4cd97b02ca03d9901d70aab76c2037d3b3799978e1a116b0c8f5e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:47Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.307682 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:47Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.311726 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.311767 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.311777 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.311793 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.311802 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:47Z","lastTransitionTime":"2026-02-16T17:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.320883 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:47Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.336983 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:47Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.354776 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:47Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.366645 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a81814b182e8628b21c89d613668a46a0be932629aacc121699a0775ddc225d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"2026-02-16T16:59:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1005e3bb-676a-4f72-8899-1a6ff4f8312b\\\\n2026-02-16T16:59:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1005e3bb-676a-4f72-8899-1a6ff4f8312b to /host/opt/cni/bin/\\\\n2026-02-16T16:59:58Z [verbose] multus-daemon started\\\\n2026-02-16T16:59:58Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T17:00:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:47Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.377455 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tf698" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tf698\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:47Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.390816 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:47Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.403014 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:47Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.413884 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.413919 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.413928 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.413944 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.413955 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:47Z","lastTransitionTime":"2026-02-16T17:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.414654 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:47Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.427739 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312f
cc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:47Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.458013 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9b07055fd16bf9dde792f372b5a19f7faf37d643ae4986f169c85fdcfe27d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a9b07055fd16bf9dde792f372b5a19f7faf37d643ae4986f169c85fdcfe27d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:45Z\\\",\\\"message\\\":\\\":00:45.541841 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0216 17:00:45.541847 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-tqtvb\\\\nI0216 17:00:45.541851 6847 ovn.go:134] Ensuring zone local for Pod 
openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0216 17:00:45.541857 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-tqtvb\\\\nI0216 17:00:45.541865 6847 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-tqtvb in node crc\\\\nI0216 17:00:45.541872 6847 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-tqtvb after 0 failed attempt(s)\\\\nI0216 17:00:45.541878 6847 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-tqtvb\\\\nI0216 17:00:45.541831 6847 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 17:00:45.541879 6847 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0216 17:00:45.541942 6847 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9krvl_openshift-ovn-kubernetes(d985e4f1-78bb-43f9-b86c-cd47831d602c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7
713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:47Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.468017 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:47Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.479989 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b09eb0-58bb-41dc-9660-eeea82e4496a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e67602098c1fa62e4147637862358b3139d1307c7f2d09fc3c715ea67520fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f61dc0173699c28378af80479249a69f3056651048398a9699c1e268d386329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090c39431176a8d41d18c2d8583aaa438dd244ceeeebaef0ee502ab9b0958d86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba8e70eecfbf8e03bddd7c5db4b683416b21b4e717c377954dfec7ff20d134e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8e70eecfbf8e03bddd7c5db4b683416b21b4e717c377954dfec7ff20d134e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:47Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.490759 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:47Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.502773 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:47Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.513936 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"237c381f-d225-4a4b-8bc9-6c03ee09015f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca731f62fcf2b8c8e68925b2b13cf1f61cab4b77425c85820278f710f4d8c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b6dad949f71a170816c56dbc1ad2c99e88e7ecf1043d74bb077950f135eeed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cmzfs\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:47Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.515465 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.515491 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.515499 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.515514 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.515523 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:47Z","lastTransitionTime":"2026-02-16T17:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.525856 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c
13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:47Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.618190 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.618222 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.618232 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.618247 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.618256 4794 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:47Z","lastTransitionTime":"2026-02-16T17:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.721426 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.721476 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.721488 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.721504 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.721517 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:47Z","lastTransitionTime":"2026-02-16T17:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.792829 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 08:39:55.267653325 +0000 UTC Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.793048 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.793055 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.793091 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.793180 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:47 crc kubenswrapper[4794]: E0216 17:00:47.793350 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:47 crc kubenswrapper[4794]: E0216 17:00:47.793561 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:47 crc kubenswrapper[4794]: E0216 17:00:47.793677 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:47 crc kubenswrapper[4794]: E0216 17:00:47.793776 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.823982 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.824021 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.824031 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.824046 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.824059 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:47Z","lastTransitionTime":"2026-02-16T17:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.925808 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.925843 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.925851 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.925864 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:47 crc kubenswrapper[4794]: I0216 17:00:47.925872 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:47Z","lastTransitionTime":"2026-02-16T17:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.028561 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.028592 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.028600 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.028613 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.028621 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:48Z","lastTransitionTime":"2026-02-16T17:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.131214 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.131251 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.131262 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.131280 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.131292 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:48Z","lastTransitionTime":"2026-02-16T17:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.234015 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.234044 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.234054 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.234069 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.234080 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:48Z","lastTransitionTime":"2026-02-16T17:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.336629 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.336677 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.336694 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.336716 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.336734 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:48Z","lastTransitionTime":"2026-02-16T17:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.439320 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.439360 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.439368 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.439383 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.439392 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:48Z","lastTransitionTime":"2026-02-16T17:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.541100 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.541124 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.541133 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.541145 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.541153 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:48Z","lastTransitionTime":"2026-02-16T17:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.643821 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.643864 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.643873 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.643887 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.643899 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:48Z","lastTransitionTime":"2026-02-16T17:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.747401 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.747454 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.747464 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.747477 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.747486 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:48Z","lastTransitionTime":"2026-02-16T17:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.793199 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 06:04:38.780233387 +0000 UTC Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.850611 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.850682 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.850699 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.851114 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.851170 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:48Z","lastTransitionTime":"2026-02-16T17:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.954279 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.954365 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.954382 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.954407 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:48 crc kubenswrapper[4794]: I0216 17:00:48.954424 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:48Z","lastTransitionTime":"2026-02-16T17:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.057431 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.057705 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.057771 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.057837 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.057906 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:49Z","lastTransitionTime":"2026-02-16T17:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.161158 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.161206 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.161220 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.161242 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.161257 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:49Z","lastTransitionTime":"2026-02-16T17:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.263496 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.263780 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.263877 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.264006 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.264090 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:49Z","lastTransitionTime":"2026-02-16T17:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.366935 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.366971 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.366984 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.367000 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.367008 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:49Z","lastTransitionTime":"2026-02-16T17:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.469943 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.470014 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.470036 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.470063 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.470084 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:49Z","lastTransitionTime":"2026-02-16T17:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.573572 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.573687 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.573711 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.573741 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.573767 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:49Z","lastTransitionTime":"2026-02-16T17:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.678061 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.678124 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.678141 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.678168 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.678189 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:49Z","lastTransitionTime":"2026-02-16T17:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.781444 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.781519 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.781540 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.781570 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.781587 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:49Z","lastTransitionTime":"2026-02-16T17:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.790762 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.790851 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.790790 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:49 crc kubenswrapper[4794]: E0216 17:00:49.790970 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:49 crc kubenswrapper[4794]: E0216 17:00:49.791091 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:49 crc kubenswrapper[4794]: E0216 17:00:49.791170 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.791212 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:49 crc kubenswrapper[4794]: E0216 17:00:49.791276 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.793832 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:24:30.436012837 +0000 UTC Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.885003 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.885079 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.885097 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.885132 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.885148 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:49Z","lastTransitionTime":"2026-02-16T17:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.988113 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.988146 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.988158 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.988174 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:49 crc kubenswrapper[4794]: I0216 17:00:49.988186 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:49Z","lastTransitionTime":"2026-02-16T17:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.090856 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.090938 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.090959 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.090979 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.090993 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:50Z","lastTransitionTime":"2026-02-16T17:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.193611 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.193650 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.193694 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.193714 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.193727 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:50Z","lastTransitionTime":"2026-02-16T17:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.296721 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.296803 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.296820 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.296843 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.296860 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:50Z","lastTransitionTime":"2026-02-16T17:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.399667 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.399751 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.399774 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.399805 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.399827 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:50Z","lastTransitionTime":"2026-02-16T17:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.503563 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.503616 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.503643 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.503664 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.503676 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:50Z","lastTransitionTime":"2026-02-16T17:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.607138 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.607202 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.607219 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.607244 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.607261 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:50Z","lastTransitionTime":"2026-02-16T17:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.710437 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.710514 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.710532 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.710558 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.710577 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:50Z","lastTransitionTime":"2026-02-16T17:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.794164 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:43:14.997810858 +0000 UTC Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.812730 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.812781 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.812797 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.812814 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.812829 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:50Z","lastTransitionTime":"2026-02-16T17:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.916270 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.916388 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.916408 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.916434 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:50 crc kubenswrapper[4794]: I0216 17:00:50.916452 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:50Z","lastTransitionTime":"2026-02-16T17:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.019722 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.019833 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.019857 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.019888 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.019911 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:51Z","lastTransitionTime":"2026-02-16T17:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.123173 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.123337 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.123366 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.123409 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.123432 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:51Z","lastTransitionTime":"2026-02-16T17:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.226647 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.226691 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.226702 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.226718 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.226729 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:51Z","lastTransitionTime":"2026-02-16T17:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.330486 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.330568 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.330595 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.330625 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.330646 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:51Z","lastTransitionTime":"2026-02-16T17:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.433523 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.433599 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.433614 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.433632 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.433646 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:51Z","lastTransitionTime":"2026-02-16T17:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.536557 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.536631 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.536647 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.536727 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.536746 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:51Z","lastTransitionTime":"2026-02-16T17:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.638477 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.638522 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.638538 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.638560 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.638574 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:51Z","lastTransitionTime":"2026-02-16T17:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.740750 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.740789 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.740799 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.740814 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.740827 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:51Z","lastTransitionTime":"2026-02-16T17:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.791371 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.791441 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.791466 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.791371 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:51 crc kubenswrapper[4794]: E0216 17:00:51.791544 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:51 crc kubenswrapper[4794]: E0216 17:00:51.791663 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:51 crc kubenswrapper[4794]: E0216 17:00:51.791798 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:51 crc kubenswrapper[4794]: E0216 17:00:51.791916 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.794632 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 08:14:09.60118108 +0000 UTC Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.843570 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.843728 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.843760 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.843812 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.843829 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:51Z","lastTransitionTime":"2026-02-16T17:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.947230 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.947288 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.947323 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.947344 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:51 crc kubenswrapper[4794]: I0216 17:00:51.947357 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:51Z","lastTransitionTime":"2026-02-16T17:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.050215 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.050280 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.050321 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.050346 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.050363 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.152760 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.152831 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.152850 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.152876 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.152897 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.255780 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.255902 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.255935 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.255971 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.255995 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.359021 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.359092 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.359126 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.359167 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.359194 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.462567 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.462597 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.462605 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.462620 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.462628 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.564680 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.564714 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.564723 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.564743 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.564758 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.666952 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.667006 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.667023 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.667046 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.667066 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.770008 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.770075 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.770092 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.770118 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.770135 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.794764 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 20:41:56.597556017 +0000 UTC Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.873801 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.873860 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.873871 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.873886 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.873896 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.930838 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.930875 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.930885 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.930899 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.930910 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4794]: E0216 17:00:52.947737 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:52Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.951690 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.951732 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.951742 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.951758 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.951769 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4794]: E0216 17:00:52.967866 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:52Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.972108 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.972170 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.972191 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.972219 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.972247 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4794]: E0216 17:00:52.989170 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:52Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.993698 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.993749 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.993775 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.993799 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4794]: I0216 17:00:52.993854 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4794]: E0216 17:00:53.006185 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:53Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.009598 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.009673 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.009694 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.009715 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.009730 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4794]: E0216 17:00:53.024188 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:53Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:53 crc kubenswrapper[4794]: E0216 17:00:53.024399 4794 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.025996 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.026035 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.026046 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.026063 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.026076 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.128245 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.128282 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.128292 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.128321 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.128331 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.230778 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.231215 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.231464 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.231684 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.231889 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.334732 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.334782 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.334794 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.334850 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.334866 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.438275 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.438562 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.438738 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.439006 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.439227 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.542360 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.542779 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.542867 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.543018 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.543106 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.645429 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.645455 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.645466 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.645480 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.645490 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.747613 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.747643 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.747651 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.747665 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.747673 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.790919 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.791024 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:53 crc kubenswrapper[4794]: E0216 17:00:53.791089 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:53 crc kubenswrapper[4794]: E0216 17:00:53.791177 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.791242 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:53 crc kubenswrapper[4794]: E0216 17:00:53.791287 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.791346 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:53 crc kubenswrapper[4794]: E0216 17:00:53.791389 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.795602 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 20:57:20.142259556 +0000 UTC Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.850203 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.850269 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.850291 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.850399 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.850429 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.953609 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.953683 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.953700 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.953726 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4794]: I0216 17:00:53.953745 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.056534 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.056592 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.056609 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.056633 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.056701 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:54Z","lastTransitionTime":"2026-02-16T17:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.159522 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.159967 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.160157 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.160387 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.160592 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:54Z","lastTransitionTime":"2026-02-16T17:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.263662 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.263729 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.263754 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.263783 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.263804 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:54Z","lastTransitionTime":"2026-02-16T17:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.367896 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.367961 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.367977 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.368000 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.368016 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:54Z","lastTransitionTime":"2026-02-16T17:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.470906 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.470939 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.470948 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.470967 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.470984 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:54Z","lastTransitionTime":"2026-02-16T17:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.573795 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.573874 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.573886 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.573905 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.573920 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:54Z","lastTransitionTime":"2026-02-16T17:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.677402 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.677469 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.677488 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.677513 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.677532 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:54Z","lastTransitionTime":"2026-02-16T17:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.780829 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.780873 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.780885 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.780899 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.780910 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:54Z","lastTransitionTime":"2026-02-16T17:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.795756 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 21:04:19.195659726 +0000 UTC Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.803715 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44bf0cc1807b2cbce57729fe5a58675ff9124110cb52d7b0d8ae4a309098b433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:54Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.816385 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:54Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.832827 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zwhdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a81814b182e8628b21c89d613668a46a0be932629aacc121699a0775ddc225d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"2026-02-16T16:59:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1005e3bb-676a-4f72-8899-1a6ff4f8312b\\\\n2026-02-16T16:59:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1005e3bb-676a-4f72-8899-1a6ff4f8312b to /host/opt/cni/bin/\\\\n2026-02-16T16:59:58Z [verbose] multus-daemon started\\\\n2026-02-16T16:59:58Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T17:00:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pk7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zwhdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:54Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.845093 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd2a1cf-d118-4f18-9ef8-7478fb22dcee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6269efb5010fa7baa3f906435c74594d73213e78cb782702c1fda4e3feae5f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b570424b4cd97b02ca03d9901d70aab76c2037d3b3799978e1a116b0c8f5e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b570424b4cd97b02ca03d9901d70aab76c2037d3b3799978e1a116b0c8f5e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:54Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.863543 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c050b632ac1fb9bafdb50339fcd8b29ec06cd4959792d91d2c1892697007e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:54Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.876329 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:54Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.883677 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.883708 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.883717 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.883729 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.883738 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:54Z","lastTransitionTime":"2026-02-16T17:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.889283 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b6a9cf4d85cbbee5fc062d14bee7246f1c4b9d6628c000d9d60d694eaf93453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://8fbd7eb8264aebd7651d484c645e5a5b1f3f24e9610b03f85342c1fb3b0bed97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:54Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.906354 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fk74m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b325454b-7201-4221-a07a-6093f1245d66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dbad4f63495ccf97b4852e3878b155e281c37662322d28b442acb4d2748e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a86e5be7fdd464ed76aff7d39b220cf5dfde1a44f3bf8a5d6b5227fae80974\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b420d51dcd4009bd32bb05a5a0ed6b83085ba642c7c0958de1c4bb9edb1d0332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b126db7ec2aed32a31c7ba3a74de2ec7d6838ee6bddc76964ae8f13f156f9375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6312f
cc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6312fcc73550cb08a64f3c4577e1daa68431962bc5c2f8e65547626c86f78971\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4381554f348550cb07ed2bfc187fb02e954f14c006fa9bff47f4f2abf95db595\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5a9feeb89fde1b3987599b603c0efa7868264fd5a60c26117f26ab57e94e50c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfvks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fk74m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:54Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.933165 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985e4f1-78bb-43f9-b86c-cd47831d602c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9b07055fd16bf9dde792f372b5a19f7faf37d643ae4986f169c85fdcfe27d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a9b07055fd16bf9dde792f372b5a19f7faf37d643ae4986f169c85fdcfe27d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:45Z\\\",\\\"message\\\":\\\":00:45.541841 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0216 17:00:45.541847 6847 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-tqtvb\\\\nI0216 17:00:45.541851 6847 ovn.go:134] Ensuring zone local for Pod 
openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0216 17:00:45.541857 6847 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-tqtvb\\\\nI0216 17:00:45.541865 6847 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-tqtvb in node crc\\\\nI0216 17:00:45.541872 6847 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-tqtvb after 0 failed attempt(s)\\\\nI0216 17:00:45.541878 6847 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-tqtvb\\\\nI0216 17:00:45.541831 6847 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 17:00:45.541879 6847 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0216 17:00:45.541942 6847 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9krvl_openshift-ovn-kubernetes(d985e4f1-78bb-43f9-b86c-cd47831d602c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ddfc2be6fa054d7
713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpr45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9krvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:54Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.944404 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqtvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7860ec44-a894-441d-b76a-2a88fa8441ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d16146c9c352c4085083323776d5be036f92c66357a810bf908ac40e229192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvswt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqtvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:54Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.954912 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tf698" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"894bff1b-b8b9-4c28-8ffe-0e0469958227\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zns6k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tf698\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:54Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:54 crc 
kubenswrapper[4794]: I0216 17:00:54.967677 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d61ecd-11f5-4131-b26d-7411c7be73e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://caeb12e0707849
0ca2d40bb7de73154d187f73d864b9b926c178e9844531b873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 16:59:48.441049 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 16:59:48.443442 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3622113610/tls.crt::/tmp/serving-cert-3622113610/tls.key\\\\\\\"\\\\nI0216 16:59:54.869737 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 16:59:54.872510 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 16:59:54.872529 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 16:59:54.872549 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 16:59:54.872554 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 16:59:54.915365 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 16:59:54.915391 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 16:59:54.915402 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 16:59:54.915407 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 16:59:54.915410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
16:59:54.915414 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 16:59:54.915580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 16:59:54.917412 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:54Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.979019 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:54Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.986605 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.986666 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.986684 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:54 crc 
kubenswrapper[4794]: I0216 17:00:54.986708 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.986726 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:54Z","lastTransitionTime":"2026-02-16T17:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:54 crc kubenswrapper[4794]: I0216 17:00:54.989597 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"237c381f-d225-4a4b-8bc9-6c03ee09015f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca731f62fcf2b8c8e68925b2b13cf1f61cab4b77425c85820278f710f4d8c939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61b6dad949f71a170816c56dbc1ad2c99e88e7ecf1043d74bb077950f135eeed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fnpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16
T17:00:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cmzfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:54Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.004802 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b09eb0-58bb-41dc-9660-eeea82e4496a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e67602098c1fa62e4147637862358b3139d1307c7f2d09fc3c715ea67520fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f61dc0173699c28378af80479249a69f3056651048398a9699c1e268d386329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090c39431176a8d41d18c2d8583aaa438dd244ceeeebaef0ee502ab9b0958d86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs
\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba8e70eecfbf8e03bddd7c5db4b683416b21b4e717c377954dfec7ff20d134e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8e70eecfbf8e03bddd7c5db4b683416b21b4e717c377954dfec7ff20d134e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:55Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.015923 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d17fb0b-381a-46a1-8bba-33daee594e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02320248c29d0f44a991bbceb827abd57ca025b858a2da13372a7745614c0166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143
869803e437f99862f5e1d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztkjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8q7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:55Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.026677 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w6ttl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf36d1cc-c61d-4339-91a7-579ff74019aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2938850604a36c75c38eae180a2a231162f98c256191b74e63abcf08c8027924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rdjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w6ttl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:55Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.042137 4794 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"099cddd1-0da5-4456-932b-f694a6c38cf4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41efac3d16da4f5c3a6bbecdb3724acf04d6b5be1a077b5fcf1625411f64c1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26c4dc3e9fd924e404a0b65038cf6975e7c61283be495d6aad984e7932d27a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aa7ac79b58423e5f4faea992c87663afe23b6ca58fec9a00a5c0cc36caedcce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:55Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.089129 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.089231 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.089256 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.089292 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.089346 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:55Z","lastTransitionTime":"2026-02-16T17:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.191514 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.191587 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.191608 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.191630 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.191645 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:55Z","lastTransitionTime":"2026-02-16T17:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.293864 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.293902 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.293914 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.293929 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.293939 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:55Z","lastTransitionTime":"2026-02-16T17:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.396744 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.396827 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.396849 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.396879 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.396927 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:55Z","lastTransitionTime":"2026-02-16T17:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.500511 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.500571 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.500588 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.500609 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.500625 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:55Z","lastTransitionTime":"2026-02-16T17:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.603994 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.604032 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.604041 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.604056 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.604068 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:55Z","lastTransitionTime":"2026-02-16T17:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.706868 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.707236 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.707254 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.707279 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.707296 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:55Z","lastTransitionTime":"2026-02-16T17:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.791121 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.791199 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.791210 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.791128 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:55 crc kubenswrapper[4794]: E0216 17:00:55.791293 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:55 crc kubenswrapper[4794]: E0216 17:00:55.791451 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:55 crc kubenswrapper[4794]: E0216 17:00:55.791566 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:55 crc kubenswrapper[4794]: E0216 17:00:55.791639 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.796218 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:03:44.814667262 +0000 UTC Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.810973 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.811052 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.811090 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.811122 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.811146 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:55Z","lastTransitionTime":"2026-02-16T17:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.914098 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.914164 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.914178 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.914195 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:55 crc kubenswrapper[4794]: I0216 17:00:55.914208 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:55Z","lastTransitionTime":"2026-02-16T17:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.017065 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.017099 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.017118 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.017133 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.017144 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:56Z","lastTransitionTime":"2026-02-16T17:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.119934 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.119985 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.119999 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.120019 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.120031 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:56Z","lastTransitionTime":"2026-02-16T17:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.222651 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.222707 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.222717 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.222734 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.222743 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:56Z","lastTransitionTime":"2026-02-16T17:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.325583 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.325843 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.325930 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.326015 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.326099 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:56Z","lastTransitionTime":"2026-02-16T17:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.429389 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.429457 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.429475 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.429500 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.429520 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:56Z","lastTransitionTime":"2026-02-16T17:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.532200 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.532237 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.532247 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.532262 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.532272 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:56Z","lastTransitionTime":"2026-02-16T17:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.634893 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.634967 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.634994 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.635025 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.635048 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:56Z","lastTransitionTime":"2026-02-16T17:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.738328 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.738604 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.738669 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.738747 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.738807 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:56Z","lastTransitionTime":"2026-02-16T17:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.796634 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 04:13:53.046506268 +0000 UTC Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.842166 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.842226 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.842268 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.842292 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.842330 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:56Z","lastTransitionTime":"2026-02-16T17:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.945609 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.946073 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.946136 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.946171 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:56 crc kubenswrapper[4794]: I0216 17:00:56.946198 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:56Z","lastTransitionTime":"2026-02-16T17:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.048970 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.049023 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.049042 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.049064 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.049080 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:57Z","lastTransitionTime":"2026-02-16T17:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.151554 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.151615 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.151636 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.151664 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.151685 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:57Z","lastTransitionTime":"2026-02-16T17:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.253916 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.254181 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.254269 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.254378 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.254460 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:57Z","lastTransitionTime":"2026-02-16T17:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.357001 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.357062 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.357080 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.357104 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.357127 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:57Z","lastTransitionTime":"2026-02-16T17:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.461477 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.461550 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.461573 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.461602 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.461623 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:57Z","lastTransitionTime":"2026-02-16T17:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.564607 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.564668 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.564683 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.564704 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.564721 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:57Z","lastTransitionTime":"2026-02-16T17:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.667865 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.668237 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.668370 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.668474 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.668584 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:57Z","lastTransitionTime":"2026-02-16T17:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.771355 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.771406 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.771421 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.771443 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.771455 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:57Z","lastTransitionTime":"2026-02-16T17:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.790686 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.790705 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.790686 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:57 crc kubenswrapper[4794]: E0216 17:00:57.790787 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:57 crc kubenswrapper[4794]: E0216 17:00:57.790919 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:57 crc kubenswrapper[4794]: E0216 17:00:57.790966 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.791614 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:57 crc kubenswrapper[4794]: E0216 17:00:57.791948 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.792209 4794 scope.go:117] "RemoveContainer" containerID="6a9b07055fd16bf9dde792f372b5a19f7faf37d643ae4986f169c85fdcfe27d9" Feb 16 17:00:57 crc kubenswrapper[4794]: E0216 17:00:57.792553 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9krvl_openshift-ovn-kubernetes(d985e4f1-78bb-43f9-b86c-cd47831d602c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.796907 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 09:35:20.83478008 +0000 UTC Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.874679 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.874787 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.874803 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:57 crc 
kubenswrapper[4794]: I0216 17:00:57.874829 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.874848 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:57Z","lastTransitionTime":"2026-02-16T17:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.978047 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.978091 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.978102 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.978119 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:57 crc kubenswrapper[4794]: I0216 17:00:57.978135 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:57Z","lastTransitionTime":"2026-02-16T17:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.080971 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.081037 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.081049 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.081068 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.081082 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:58Z","lastTransitionTime":"2026-02-16T17:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.183633 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.183676 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.183685 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.183700 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.183712 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:58Z","lastTransitionTime":"2026-02-16T17:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.286449 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.286625 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.286645 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.286666 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.286680 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:58Z","lastTransitionTime":"2026-02-16T17:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.389248 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.389353 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.389386 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.389425 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.389450 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:58Z","lastTransitionTime":"2026-02-16T17:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.492104 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.492169 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.492187 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.492217 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.492234 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:58Z","lastTransitionTime":"2026-02-16T17:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.596236 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.596292 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.596329 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.596356 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.596371 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:58Z","lastTransitionTime":"2026-02-16T17:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.700426 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.700510 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.700553 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.700588 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.700608 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:58Z","lastTransitionTime":"2026-02-16T17:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.797055 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 21:55:37.573638112 +0000 UTC Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.803463 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.803522 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.803539 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.803569 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.803587 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:58Z","lastTransitionTime":"2026-02-16T17:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.906198 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.906280 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.906338 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.906364 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:58 crc kubenswrapper[4794]: I0216 17:00:58.906381 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:58Z","lastTransitionTime":"2026-02-16T17:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.009485 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.009548 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.009565 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.009591 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.009610 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:59Z","lastTransitionTime":"2026-02-16T17:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.111879 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.111952 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.111975 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.112004 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.112026 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:59Z","lastTransitionTime":"2026-02-16T17:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.214390 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.214430 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.214439 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.214454 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.214468 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:59Z","lastTransitionTime":"2026-02-16T17:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.316685 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.316773 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.316798 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.316832 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.316858 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:59Z","lastTransitionTime":"2026-02-16T17:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.420118 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.420178 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.420201 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.420229 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.420252 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:59Z","lastTransitionTime":"2026-02-16T17:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.523407 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.523464 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.523480 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.523504 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.523521 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:59Z","lastTransitionTime":"2026-02-16T17:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.625787 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.625849 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.625866 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.625890 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.625909 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:59Z","lastTransitionTime":"2026-02-16T17:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.676936 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.676989 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.677037 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:59 crc kubenswrapper[4794]: E0216 17:00:59.677147 4794 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:00:59 crc kubenswrapper[4794]: E0216 17:00:59.677198 4794 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:00:59 crc kubenswrapper[4794]: E0216 17:00:59.677207 4794 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:03.677187225 +0000 UTC m=+149.625281882 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:00:59 crc kubenswrapper[4794]: E0216 17:00:59.677349 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:00:59 crc kubenswrapper[4794]: E0216 17:00:59.677420 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:03.677384601 +0000 UTC m=+149.625479288 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:00:59 crc kubenswrapper[4794]: E0216 17:00:59.677483 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:00:59 crc kubenswrapper[4794]: E0216 17:00:59.677522 4794 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:59 crc kubenswrapper[4794]: E0216 17:00:59.677638 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:03.677609516 +0000 UTC m=+149.625704203 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.729202 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.729373 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.729397 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.729426 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.729446 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:59Z","lastTransitionTime":"2026-02-16T17:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.778596 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:00:59 crc kubenswrapper[4794]: E0216 17:00:59.778962 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:03.778919029 +0000 UTC m=+149.727013716 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.779042 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:59 crc kubenswrapper[4794]: E0216 17:00:59.779217 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:00:59 crc kubenswrapper[4794]: E0216 
17:00:59.779243 4794 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:00:59 crc kubenswrapper[4794]: E0216 17:00:59.779263 4794 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:59 crc kubenswrapper[4794]: E0216 17:00:59.779419 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:03.779385032 +0000 UTC m=+149.727479719 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.790744 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.790774 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.790863 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.790948 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:59 crc kubenswrapper[4794]: E0216 17:00:59.790978 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:59 crc kubenswrapper[4794]: E0216 17:00:59.791106 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:00:59 crc kubenswrapper[4794]: E0216 17:00:59.791194 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:59 crc kubenswrapper[4794]: E0216 17:00:59.791278 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.797226 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:35:20.063145214 +0000 UTC Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.831922 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.831982 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.832000 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.832024 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.832043 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:59Z","lastTransitionTime":"2026-02-16T17:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.933913 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.933953 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.933966 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.933982 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:59 crc kubenswrapper[4794]: I0216 17:00:59.933993 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:59Z","lastTransitionTime":"2026-02-16T17:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.036384 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.036440 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.036457 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.036483 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.036501 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:00Z","lastTransitionTime":"2026-02-16T17:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.138421 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.138462 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.138476 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.138490 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.138502 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:00Z","lastTransitionTime":"2026-02-16T17:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.241338 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.241395 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.241408 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.241425 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.241436 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:00Z","lastTransitionTime":"2026-02-16T17:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.343528 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.343566 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.343595 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.343611 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.343621 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:00Z","lastTransitionTime":"2026-02-16T17:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.447161 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.447236 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.447261 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.447290 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.447344 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:00Z","lastTransitionTime":"2026-02-16T17:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.551780 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.551858 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.551893 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.551929 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.551954 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:00Z","lastTransitionTime":"2026-02-16T17:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.655716 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.655785 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.655803 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.655828 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.655845 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:00Z","lastTransitionTime":"2026-02-16T17:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.758899 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.758974 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.758991 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.759014 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.759031 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:00Z","lastTransitionTime":"2026-02-16T17:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.797892 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 03:01:13.389391162 +0000 UTC Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.862605 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.862924 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.862940 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.862955 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.862963 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:00Z","lastTransitionTime":"2026-02-16T17:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.965922 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.965980 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.965994 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.966012 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:00 crc kubenswrapper[4794]: I0216 17:01:00.966029 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:00Z","lastTransitionTime":"2026-02-16T17:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.068782 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.068837 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.068849 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.068866 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.068879 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:01Z","lastTransitionTime":"2026-02-16T17:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.172805 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.174036 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.174116 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.174215 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.174322 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:01Z","lastTransitionTime":"2026-02-16T17:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.277177 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.277208 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.277217 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.277229 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.277237 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:01Z","lastTransitionTime":"2026-02-16T17:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.378975 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.379019 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.379032 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.379053 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.379068 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:01Z","lastTransitionTime":"2026-02-16T17:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.481781 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.481841 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.481859 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.481886 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.481906 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:01Z","lastTransitionTime":"2026-02-16T17:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.584092 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.584119 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.584128 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.584140 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.584148 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:01Z","lastTransitionTime":"2026-02-16T17:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.686562 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.686599 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.686616 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.686631 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.686642 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:01Z","lastTransitionTime":"2026-02-16T17:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.789430 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.789464 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.789474 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.789490 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.789500 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:01Z","lastTransitionTime":"2026-02-16T17:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.791198 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.791269 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:01:01 crc kubenswrapper[4794]: E0216 17:01:01.791297 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:01 crc kubenswrapper[4794]: E0216 17:01:01.791466 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.791496 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.791479 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:01 crc kubenswrapper[4794]: E0216 17:01:01.791556 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:01 crc kubenswrapper[4794]: E0216 17:01:01.791663 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.798256 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 02:04:05.183764029 +0000 UTC Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.892626 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.892659 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.892670 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.892684 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.892695 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:01Z","lastTransitionTime":"2026-02-16T17:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.995503 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.995558 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.995577 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.995601 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:01 crc kubenswrapper[4794]: I0216 17:01:01.995617 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:01Z","lastTransitionTime":"2026-02-16T17:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.098531 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.098612 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.098628 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.098655 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.098672 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:02Z","lastTransitionTime":"2026-02-16T17:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.201448 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.201492 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.201507 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.201526 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.201541 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:02Z","lastTransitionTime":"2026-02-16T17:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.303801 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.303845 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.303858 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.303874 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.303886 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:02Z","lastTransitionTime":"2026-02-16T17:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.407256 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.407345 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.407362 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.407388 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.407406 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:02Z","lastTransitionTime":"2026-02-16T17:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.510801 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.510876 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.510904 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.510934 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.510957 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:02Z","lastTransitionTime":"2026-02-16T17:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.618052 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.618129 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.618149 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.618178 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.618201 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:02Z","lastTransitionTime":"2026-02-16T17:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.721108 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.721170 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.721187 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.721218 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.721241 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:02Z","lastTransitionTime":"2026-02-16T17:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.802105 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 00:35:39.460298838 +0000 UTC Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.819217 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.824913 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.825060 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.825086 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.825114 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.825136 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:02Z","lastTransitionTime":"2026-02-16T17:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.927844 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.927905 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.927927 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.928026 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:02 crc kubenswrapper[4794]: I0216 17:01:02.928056 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:02Z","lastTransitionTime":"2026-02-16T17:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.030396 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.030430 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.030441 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.030455 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.030466 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.133864 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.134034 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.134056 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.134113 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.134135 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.232397 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.232456 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.232493 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.232518 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.232539 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4794]: E0216 17:01:03.250231 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.255049 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.255098 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.255144 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.255165 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.255183 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4794]: E0216 17:01:03.269187 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.274551 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.274584 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.274596 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.274610 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.274621 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4794]: E0216 17:01:03.292416 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.296375 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.296417 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.296425 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.296437 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.296446 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4794]: E0216 17:01:03.313925 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.319423 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.319645 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.319778 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.319925 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.320058 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4794]: E0216 17:01:03.340884 4794 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ccf280c0-9a33-46bd-be2c-0dca34f382e0\\\",\\\"systemUUID\\\":\\\"b3d0f632-3e25-45db-ae26-e5b3ec8421a1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:03 crc kubenswrapper[4794]: E0216 17:01:03.341035 4794 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.342989 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.343020 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.343074 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.343094 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.343105 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.446792 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.446827 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.446835 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.446849 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.446858 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.549143 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.549228 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.549250 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.549278 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.549334 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.651709 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.651738 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.651747 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.651762 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.651771 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.753601 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.753629 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.753637 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.753651 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.753658 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.790911 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:03 crc kubenswrapper[4794]: E0216 17:01:03.791011 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.791593 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.791659 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.791681 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:03 crc kubenswrapper[4794]: E0216 17:01:03.792235 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:03 crc kubenswrapper[4794]: E0216 17:01:03.792271 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:01:03 crc kubenswrapper[4794]: E0216 17:01:03.792324 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.802992 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 03:17:55.340987626 +0000 UTC Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.856187 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.856247 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.856266 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.856289 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.856328 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.958681 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.958712 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.958722 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.958737 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4794]: I0216 17:01:03.958748 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.061528 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.061808 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.061896 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.062009 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.062100 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:04Z","lastTransitionTime":"2026-02-16T17:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.164625 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.164681 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.164700 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.164728 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.164750 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:04Z","lastTransitionTime":"2026-02-16T17:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.268006 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.268038 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.268048 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.268065 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.268076 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:04Z","lastTransitionTime":"2026-02-16T17:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.370266 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.370350 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.370367 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.370387 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.370400 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:04Z","lastTransitionTime":"2026-02-16T17:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.473348 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.473620 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.473739 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.473883 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.473982 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:04Z","lastTransitionTime":"2026-02-16T17:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.577514 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.577551 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.577562 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.577606 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.577620 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:04Z","lastTransitionTime":"2026-02-16T17:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.679506 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.679833 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.679936 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.680024 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.680103 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:04Z","lastTransitionTime":"2026-02-16T17:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.783862 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.784114 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.784250 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.784359 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.784456 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:04Z","lastTransitionTime":"2026-02-16T17:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.803394 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:04:54.44509671 +0000 UTC Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.889921 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.889951 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.889959 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.889973 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.889982 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:04Z","lastTransitionTime":"2026-02-16T17:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.892495 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-zwhdn" podStartSLOduration=69.89248283 podStartE2EDuration="1m9.89248283s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:04.883786989 +0000 UTC m=+90.831881636" watchObservedRunningTime="2026-02-16 17:01:04.89248283 +0000 UTC m=+90.840577467" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.913174 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=69.913133959 podStartE2EDuration="1m9.913133959s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:04.912213285 +0000 UTC m=+90.860307932" watchObservedRunningTime="2026-02-16 17:01:04.913133959 +0000 UTC m=+90.861228606" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.913694 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=22.913686154 podStartE2EDuration="22.913686154s" podCreationTimestamp="2026-02-16 17:00:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:04.892933512 +0000 UTC m=+90.841028159" watchObservedRunningTime="2026-02-16 17:01:04.913686154 +0000 UTC m=+90.861780801" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.992848 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.992894 4794 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.992905 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.992922 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.992933 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:04Z","lastTransitionTime":"2026-02-16T17:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:04 crc kubenswrapper[4794]: I0216 17:01:04.996551 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-fk74m" podStartSLOduration=69.996531606 podStartE2EDuration="1m9.996531606s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:04.964511815 +0000 UTC m=+90.912606462" watchObservedRunningTime="2026-02-16 17:01:04.996531606 +0000 UTC m=+90.944626253" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.007534 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-tqtvb" podStartSLOduration=71.007518618 podStartE2EDuration="1m11.007518618s" podCreationTimestamp="2026-02-16 16:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:05.006621214 +0000 UTC 
m=+90.954715861" watchObservedRunningTime="2026-02-16 17:01:05.007518618 +0000 UTC m=+90.955613265" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.088035 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podStartSLOduration=70.088020357 podStartE2EDuration="1m10.088020357s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:05.087719049 +0000 UTC m=+91.035813696" watchObservedRunningTime="2026-02-16 17:01:05.088020357 +0000 UTC m=+91.036115004" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.088260 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=3.088256014 podStartE2EDuration="3.088256014s" podCreationTimestamp="2026-02-16 17:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:05.074759245 +0000 UTC m=+91.022853892" watchObservedRunningTime="2026-02-16 17:01:05.088256014 +0000 UTC m=+91.036350651" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.095617 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.095855 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.096050 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.096249 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.096479 4794 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:05Z","lastTransitionTime":"2026-02-16T17:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.102165 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-w6ttl" podStartSLOduration=71.102152603 podStartE2EDuration="1m11.102152603s" podCreationTimestamp="2026-02-16 16:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:05.101427894 +0000 UTC m=+91.049522541" watchObservedRunningTime="2026-02-16 17:01:05.102152603 +0000 UTC m=+91.050247250" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.117994 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cmzfs" podStartSLOduration=69.117975894 podStartE2EDuration="1m9.117975894s" podCreationTimestamp="2026-02-16 16:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:05.117575943 +0000 UTC m=+91.065670590" watchObservedRunningTime="2026-02-16 17:01:05.117975894 +0000 UTC m=+91.066070541" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.131334 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=40.131316018 podStartE2EDuration="40.131316018s" podCreationTimestamp="2026-02-16 17:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:05.13024122 +0000 UTC m=+91.078335867" watchObservedRunningTime="2026-02-16 17:01:05.131316018 +0000 UTC m=+91.079410665" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.199001 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.199251 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.199388 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.199508 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.199608 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:05Z","lastTransitionTime":"2026-02-16T17:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.301711 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.301789 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.301818 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.301848 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.301873 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:05Z","lastTransitionTime":"2026-02-16T17:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.405421 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.405507 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.405525 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.405550 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.405567 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:05Z","lastTransitionTime":"2026-02-16T17:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.507675 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.507720 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.507731 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.507748 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.507759 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:05Z","lastTransitionTime":"2026-02-16T17:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.610064 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.610142 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.610166 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.610196 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.610224 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:05Z","lastTransitionTime":"2026-02-16T17:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.713192 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.713241 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.713257 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.713279 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.713295 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:05Z","lastTransitionTime":"2026-02-16T17:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.790999 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.791059 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.791068 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.791126 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:05 crc kubenswrapper[4794]: E0216 17:01:05.791265 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:05 crc kubenswrapper[4794]: E0216 17:01:05.791477 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:05 crc kubenswrapper[4794]: E0216 17:01:05.791639 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:01:05 crc kubenswrapper[4794]: E0216 17:01:05.791809 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.804114 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 00:54:07.149507348 +0000 UTC Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.816268 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.816375 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.816399 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.816425 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.816445 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:05Z","lastTransitionTime":"2026-02-16T17:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.919974 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.920046 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.920064 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.920100 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:05 crc kubenswrapper[4794]: I0216 17:01:05.920123 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:05Z","lastTransitionTime":"2026-02-16T17:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.023693 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.023765 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.023780 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.023817 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.023836 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:06Z","lastTransitionTime":"2026-02-16T17:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.126789 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.126858 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.126869 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.126889 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.126900 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:06Z","lastTransitionTime":"2026-02-16T17:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.230347 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.230407 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.230419 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.230436 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.230449 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:06Z","lastTransitionTime":"2026-02-16T17:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.333200 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.333244 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.333255 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.333271 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.333282 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:06Z","lastTransitionTime":"2026-02-16T17:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.435955 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.435994 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.436005 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.436021 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.436034 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:06Z","lastTransitionTime":"2026-02-16T17:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.538956 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.539006 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.539014 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.539028 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.539040 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:06Z","lastTransitionTime":"2026-02-16T17:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.642366 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.642809 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.643032 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.643197 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.643483 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:06Z","lastTransitionTime":"2026-02-16T17:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.746357 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.746404 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.746439 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.746463 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.746475 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:06Z","lastTransitionTime":"2026-02-16T17:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.804995 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 16:48:13.051241289 +0000 UTC Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.849349 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.849417 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.849444 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.849475 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.849502 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:06Z","lastTransitionTime":"2026-02-16T17:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.952056 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.952117 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.952135 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.952159 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:06 crc kubenswrapper[4794]: I0216 17:01:06.952181 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:06Z","lastTransitionTime":"2026-02-16T17:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.055154 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.055185 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.055195 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.055207 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.055216 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:07Z","lastTransitionTime":"2026-02-16T17:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.157713 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.157752 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.157765 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.157779 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.157791 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:07Z","lastTransitionTime":"2026-02-16T17:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.259996 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.260055 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.260070 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.260091 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.260112 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:07Z","lastTransitionTime":"2026-02-16T17:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.363832 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.363894 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.363912 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.363936 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.363950 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:07Z","lastTransitionTime":"2026-02-16T17:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.467997 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.468085 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.468109 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.468142 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.468160 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:07Z","lastTransitionTime":"2026-02-16T17:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.571168 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.571221 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.571236 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.571257 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.571277 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:07Z","lastTransitionTime":"2026-02-16T17:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.674722 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.674768 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.674784 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.674805 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.674818 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:07Z","lastTransitionTime":"2026-02-16T17:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.778517 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.778578 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.778596 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.778620 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.778638 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:07Z","lastTransitionTime":"2026-02-16T17:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.791036 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:07 crc kubenswrapper[4794]: E0216 17:01:07.791216 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.791396 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.791400 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:07 crc kubenswrapper[4794]: E0216 17:01:07.791546 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:07 crc kubenswrapper[4794]: E0216 17:01:07.791702 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.791938 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:01:07 crc kubenswrapper[4794]: E0216 17:01:07.792266 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.805739 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:23:02.73264365 +0000 UTC Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.881231 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.881988 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.882030 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.882048 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.882060 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:07Z","lastTransitionTime":"2026-02-16T17:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.986130 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.986165 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.986174 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.986189 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:07 crc kubenswrapper[4794]: I0216 17:01:07.986199 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:07Z","lastTransitionTime":"2026-02-16T17:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.089154 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.089225 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.089251 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.089280 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.089334 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:08Z","lastTransitionTime":"2026-02-16T17:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.191907 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.191964 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.191979 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.192006 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.192023 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:08Z","lastTransitionTime":"2026-02-16T17:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.294356 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.294432 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.294451 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.294474 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.294492 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:08Z","lastTransitionTime":"2026-02-16T17:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.397559 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.397606 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.397616 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.397632 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.397643 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:08Z","lastTransitionTime":"2026-02-16T17:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.501752 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.504367 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.504613 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.504763 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.504908 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:08Z","lastTransitionTime":"2026-02-16T17:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.611870 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.611933 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.611955 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.611987 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.612004 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:08Z","lastTransitionTime":"2026-02-16T17:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.714707 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.714748 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.714875 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.714910 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.714921 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:08Z","lastTransitionTime":"2026-02-16T17:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.806745 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:41:19.685370069 +0000 UTC Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.818790 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.818869 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.818895 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.818922 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.818939 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:08Z","lastTransitionTime":"2026-02-16T17:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.922741 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.922778 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.922787 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.922800 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:08 crc kubenswrapper[4794]: I0216 17:01:08.922808 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:08Z","lastTransitionTime":"2026-02-16T17:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.025371 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.025424 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.025442 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.025465 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.025482 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:09Z","lastTransitionTime":"2026-02-16T17:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.128392 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.128470 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.128493 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.128522 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.128547 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:09Z","lastTransitionTime":"2026-02-16T17:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.231823 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.231881 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.231900 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.231924 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.231941 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:09Z","lastTransitionTime":"2026-02-16T17:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.334885 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.334921 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.334929 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.334942 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.334951 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:09Z","lastTransitionTime":"2026-02-16T17:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.437780 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.437835 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.437846 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.437860 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.437871 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:09Z","lastTransitionTime":"2026-02-16T17:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.540130 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.540173 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.540188 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.540205 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.540216 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:09Z","lastTransitionTime":"2026-02-16T17:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.643166 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.643203 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.643217 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.643238 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.643251 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:09Z","lastTransitionTime":"2026-02-16T17:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.746956 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.747010 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.747022 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.747040 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.747052 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:09Z","lastTransitionTime":"2026-02-16T17:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.790686 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.790773 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.790777 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.790701 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:09 crc kubenswrapper[4794]: E0216 17:01:09.790874 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:09 crc kubenswrapper[4794]: E0216 17:01:09.791055 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:01:09 crc kubenswrapper[4794]: E0216 17:01:09.791372 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:09 crc kubenswrapper[4794]: E0216 17:01:09.791297 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.806863 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 08:02:23.656656296 +0000 UTC Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.850263 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.850347 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.850359 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.850373 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.850383 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:09Z","lastTransitionTime":"2026-02-16T17:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.953455 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.953531 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.953552 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.953580 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:09 crc kubenswrapper[4794]: I0216 17:01:09.953601 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:09Z","lastTransitionTime":"2026-02-16T17:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.056708 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.056764 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.056780 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.056798 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.056811 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:10Z","lastTransitionTime":"2026-02-16T17:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.159360 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.159404 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.159418 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.159437 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.159451 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:10Z","lastTransitionTime":"2026-02-16T17:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.262471 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.262548 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.262569 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.262601 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.262622 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:10Z","lastTransitionTime":"2026-02-16T17:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.365979 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.366047 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.366066 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.366099 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.366117 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:10Z","lastTransitionTime":"2026-02-16T17:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.468927 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.468974 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.468985 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.469005 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.469018 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:10Z","lastTransitionTime":"2026-02-16T17:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.571665 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.571716 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.571733 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.571792 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.571811 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:10Z","lastTransitionTime":"2026-02-16T17:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.674587 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.674638 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.674660 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.674685 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.674704 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:10Z","lastTransitionTime":"2026-02-16T17:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.777911 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.777965 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.777982 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.778005 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.778021 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:10Z","lastTransitionTime":"2026-02-16T17:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.793848 4794 scope.go:117] "RemoveContainer" containerID="6a9b07055fd16bf9dde792f372b5a19f7faf37d643ae4986f169c85fdcfe27d9" Feb 16 17:01:10 crc kubenswrapper[4794]: E0216 17:01:10.794092 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9krvl_openshift-ovn-kubernetes(d985e4f1-78bb-43f9-b86c-cd47831d602c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.807364 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:13:40.116389893 +0000 UTC Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.880968 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.881029 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.881046 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.881069 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.881086 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:10Z","lastTransitionTime":"2026-02-16T17:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.984683 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.984718 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.984729 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.984745 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:10 crc kubenswrapper[4794]: I0216 17:01:10.984756 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:10Z","lastTransitionTime":"2026-02-16T17:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.087565 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.087614 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.087629 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.087649 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.087664 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:11Z","lastTransitionTime":"2026-02-16T17:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.190999 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.191350 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.191497 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.191633 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.191749 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:11Z","lastTransitionTime":"2026-02-16T17:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.295219 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.295257 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.295270 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.295285 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.295295 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:11Z","lastTransitionTime":"2026-02-16T17:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.398557 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.398629 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.398648 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.398675 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.398692 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:11Z","lastTransitionTime":"2026-02-16T17:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.501167 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.501240 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.501257 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.501280 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.501296 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:11Z","lastTransitionTime":"2026-02-16T17:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.604149 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.604226 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.604250 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.604280 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.604333 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:11Z","lastTransitionTime":"2026-02-16T17:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.707253 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.707291 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.707318 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.707332 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.707341 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:11Z","lastTransitionTime":"2026-02-16T17:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.791207 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.791287 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.791348 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:11 crc kubenswrapper[4794]: E0216 17:01:11.791485 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.791516 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:11 crc kubenswrapper[4794]: E0216 17:01:11.791762 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:01:11 crc kubenswrapper[4794]: E0216 17:01:11.791891 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:11 crc kubenswrapper[4794]: E0216 17:01:11.791969 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.808066 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:34:19.24990957 +0000 UTC Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.810665 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.810727 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.810761 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.810800 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.810822 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:11Z","lastTransitionTime":"2026-02-16T17:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.914094 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.914137 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.914149 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.914166 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:11 crc kubenswrapper[4794]: I0216 17:01:11.914177 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:11Z","lastTransitionTime":"2026-02-16T17:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.018598 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.018684 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.018694 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.018724 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.018735 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:12Z","lastTransitionTime":"2026-02-16T17:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.121864 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.121931 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.121949 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.121975 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.121991 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:12Z","lastTransitionTime":"2026-02-16T17:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.225012 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.225055 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.225074 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.225098 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.225114 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:12Z","lastTransitionTime":"2026-02-16T17:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.328137 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.328194 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.328211 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.328238 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.328257 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:12Z","lastTransitionTime":"2026-02-16T17:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.431592 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.431677 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.431701 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.431731 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.431754 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:12Z","lastTransitionTime":"2026-02-16T17:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.534957 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.535993 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.536034 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.536069 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.536092 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:12Z","lastTransitionTime":"2026-02-16T17:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.639053 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.639129 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.639154 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.639187 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.639210 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:12Z","lastTransitionTime":"2026-02-16T17:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.742334 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.742409 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.742426 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.742450 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.742467 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:12Z","lastTransitionTime":"2026-02-16T17:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.808387 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:11:48.050210076 +0000 UTC Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.845122 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.845187 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.845207 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.845233 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.845251 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:12Z","lastTransitionTime":"2026-02-16T17:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.947930 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.947990 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.948014 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.948044 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:12 crc kubenswrapper[4794]: I0216 17:01:12.948065 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:12Z","lastTransitionTime":"2026-02-16T17:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.051201 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.051254 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.051266 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.051285 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.051299 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:13Z","lastTransitionTime":"2026-02-16T17:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.154295 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.154438 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.154456 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.154487 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.154511 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:13Z","lastTransitionTime":"2026-02-16T17:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.258070 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.258128 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.258144 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.258169 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.258181 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:13Z","lastTransitionTime":"2026-02-16T17:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.361758 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.361813 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.361830 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.361853 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.361871 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:13Z","lastTransitionTime":"2026-02-16T17:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.464612 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.464661 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.464674 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.464694 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.464708 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:13Z","lastTransitionTime":"2026-02-16T17:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.568336 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.568394 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.568410 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.568429 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.568444 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:13Z","lastTransitionTime":"2026-02-16T17:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.619440 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.619495 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.619509 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.619532 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.619548 4794 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:13Z","lastTransitionTime":"2026-02-16T17:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.689942 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=77.689921002 podStartE2EDuration="1m17.689921002s" podCreationTimestamp="2026-02-16 16:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:05.143291366 +0000 UTC m=+91.091386013" watchObservedRunningTime="2026-02-16 17:01:13.689921002 +0000 UTC m=+99.638015659" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.690608 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-6h46t"] Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.690992 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6h46t" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.692880 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.694407 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.694512 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.694609 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.724391 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/563163db-e403-4a7c-b52e-b6f9a4354774-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6h46t\" (UID: \"563163db-e403-4a7c-b52e-b6f9a4354774\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6h46t" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.724510 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/563163db-e403-4a7c-b52e-b6f9a4354774-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6h46t\" (UID: \"563163db-e403-4a7c-b52e-b6f9a4354774\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6h46t" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.724554 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/563163db-e403-4a7c-b52e-b6f9a4354774-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6h46t\" (UID: \"563163db-e403-4a7c-b52e-b6f9a4354774\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6h46t" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.724594 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/563163db-e403-4a7c-b52e-b6f9a4354774-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6h46t\" (UID: \"563163db-e403-4a7c-b52e-b6f9a4354774\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6h46t" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.724649 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/563163db-e403-4a7c-b52e-b6f9a4354774-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6h46t\" (UID: \"563163db-e403-4a7c-b52e-b6f9a4354774\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6h46t" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.790912 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.790982 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.791023 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.791067 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:13 crc kubenswrapper[4794]: E0216 17:01:13.791692 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:13 crc kubenswrapper[4794]: E0216 17:01:13.791722 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:13 crc kubenswrapper[4794]: E0216 17:01:13.791467 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:01:13 crc kubenswrapper[4794]: E0216 17:01:13.791483 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.809333 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 11:12:46.203996823 +0000 UTC Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.809704 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.820353 4794 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.825172 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/563163db-e403-4a7c-b52e-b6f9a4354774-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6h46t\" (UID: 
\"563163db-e403-4a7c-b52e-b6f9a4354774\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6h46t" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.825421 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/563163db-e403-4a7c-b52e-b6f9a4354774-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6h46t\" (UID: \"563163db-e403-4a7c-b52e-b6f9a4354774\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6h46t" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.825468 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/563163db-e403-4a7c-b52e-b6f9a4354774-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6h46t\" (UID: \"563163db-e403-4a7c-b52e-b6f9a4354774\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6h46t" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.825636 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/563163db-e403-4a7c-b52e-b6f9a4354774-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6h46t\" (UID: \"563163db-e403-4a7c-b52e-b6f9a4354774\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6h46t" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.825681 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/563163db-e403-4a7c-b52e-b6f9a4354774-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6h46t\" (UID: \"563163db-e403-4a7c-b52e-b6f9a4354774\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6h46t" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.825742 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/563163db-e403-4a7c-b52e-b6f9a4354774-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6h46t\" (UID: \"563163db-e403-4a7c-b52e-b6f9a4354774\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6h46t" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.825914 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/563163db-e403-4a7c-b52e-b6f9a4354774-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6h46t\" (UID: \"563163db-e403-4a7c-b52e-b6f9a4354774\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6h46t" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.827619 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/563163db-e403-4a7c-b52e-b6f9a4354774-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6h46t\" (UID: \"563163db-e403-4a7c-b52e-b6f9a4354774\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6h46t" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.836865 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/563163db-e403-4a7c-b52e-b6f9a4354774-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6h46t\" (UID: \"563163db-e403-4a7c-b52e-b6f9a4354774\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6h46t" Feb 16 17:01:13 crc kubenswrapper[4794]: I0216 17:01:13.845796 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/563163db-e403-4a7c-b52e-b6f9a4354774-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6h46t\" (UID: \"563163db-e403-4a7c-b52e-b6f9a4354774\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6h46t" Feb 16 17:01:14 crc kubenswrapper[4794]: 
I0216 17:01:14.018559 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6h46t" Feb 16 17:01:14 crc kubenswrapper[4794]: I0216 17:01:14.331526 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs\") pod \"network-metrics-daemon-tf698\" (UID: \"894bff1b-b8b9-4c28-8ffe-0e0469958227\") " pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:01:14 crc kubenswrapper[4794]: E0216 17:01:14.331700 4794 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:01:14 crc kubenswrapper[4794]: E0216 17:01:14.331996 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs podName:894bff1b-b8b9-4c28-8ffe-0e0469958227 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:18.331979547 +0000 UTC m=+164.280074194 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs") pod "network-metrics-daemon-tf698" (UID: "894bff1b-b8b9-4c28-8ffe-0e0469958227") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:01:14 crc kubenswrapper[4794]: I0216 17:01:14.367587 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6h46t" event={"ID":"563163db-e403-4a7c-b52e-b6f9a4354774","Type":"ContainerStarted","Data":"8db150542c77a1cf29dc2c2f53e1c6c1eecde3e1d00ddf6d7d47f3e0e994104a"} Feb 16 17:01:14 crc kubenswrapper[4794]: I0216 17:01:14.367838 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6h46t" event={"ID":"563163db-e403-4a7c-b52e-b6f9a4354774","Type":"ContainerStarted","Data":"16e04084e94b176372e402287d650afc4957ba9d128f7d2b6aea1c0befca3610"} Feb 16 17:01:14 crc kubenswrapper[4794]: I0216 17:01:14.381051 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6h46t" podStartSLOduration=79.38102968 podStartE2EDuration="1m19.38102968s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:14.380727062 +0000 UTC m=+100.328821759" watchObservedRunningTime="2026-02-16 17:01:14.38102968 +0000 UTC m=+100.329124357" Feb 16 17:01:15 crc kubenswrapper[4794]: I0216 17:01:15.791409 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:01:15 crc kubenswrapper[4794]: I0216 17:01:15.791439 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:15 crc kubenswrapper[4794]: I0216 17:01:15.791541 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:15 crc kubenswrapper[4794]: I0216 17:01:15.791575 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:15 crc kubenswrapper[4794]: E0216 17:01:15.791764 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:01:15 crc kubenswrapper[4794]: E0216 17:01:15.791931 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:15 crc kubenswrapper[4794]: E0216 17:01:15.791977 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:15 crc kubenswrapper[4794]: E0216 17:01:15.792054 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:17 crc kubenswrapper[4794]: I0216 17:01:17.790992 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:01:17 crc kubenswrapper[4794]: I0216 17:01:17.791035 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:17 crc kubenswrapper[4794]: I0216 17:01:17.791059 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:17 crc kubenswrapper[4794]: I0216 17:01:17.790992 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:17 crc kubenswrapper[4794]: E0216 17:01:17.791121 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:01:17 crc kubenswrapper[4794]: E0216 17:01:17.791278 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:17 crc kubenswrapper[4794]: E0216 17:01:17.791386 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:17 crc kubenswrapper[4794]: E0216 17:01:17.791716 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:19 crc kubenswrapper[4794]: I0216 17:01:19.791079 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:19 crc kubenswrapper[4794]: I0216 17:01:19.791119 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:01:19 crc kubenswrapper[4794]: I0216 17:01:19.791166 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:19 crc kubenswrapper[4794]: I0216 17:01:19.791174 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:19 crc kubenswrapper[4794]: E0216 17:01:19.791293 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:19 crc kubenswrapper[4794]: E0216 17:01:19.791420 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:19 crc kubenswrapper[4794]: E0216 17:01:19.791592 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:19 crc kubenswrapper[4794]: E0216 17:01:19.791763 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:01:21 crc kubenswrapper[4794]: I0216 17:01:21.791252 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:21 crc kubenswrapper[4794]: I0216 17:01:21.791330 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:21 crc kubenswrapper[4794]: I0216 17:01:21.791399 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:21 crc kubenswrapper[4794]: I0216 17:01:21.791257 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:01:21 crc kubenswrapper[4794]: E0216 17:01:21.791451 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:21 crc kubenswrapper[4794]: E0216 17:01:21.791530 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:21 crc kubenswrapper[4794]: E0216 17:01:21.791620 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:21 crc kubenswrapper[4794]: E0216 17:01:21.791773 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:01:23 crc kubenswrapper[4794]: I0216 17:01:23.790571 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:23 crc kubenswrapper[4794]: I0216 17:01:23.790619 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:01:23 crc kubenswrapper[4794]: I0216 17:01:23.790631 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:23 crc kubenswrapper[4794]: I0216 17:01:23.790585 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:23 crc kubenswrapper[4794]: E0216 17:01:23.790741 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:23 crc kubenswrapper[4794]: E0216 17:01:23.790896 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:01:23 crc kubenswrapper[4794]: E0216 17:01:23.791003 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:23 crc kubenswrapper[4794]: E0216 17:01:23.791087 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:25 crc kubenswrapper[4794]: I0216 17:01:25.791392 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:01:25 crc kubenswrapper[4794]: I0216 17:01:25.791463 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:25 crc kubenswrapper[4794]: E0216 17:01:25.791609 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:01:25 crc kubenswrapper[4794]: I0216 17:01:25.791413 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:25 crc kubenswrapper[4794]: I0216 17:01:25.791698 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:25 crc kubenswrapper[4794]: E0216 17:01:25.791794 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:25 crc kubenswrapper[4794]: E0216 17:01:25.792584 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:25 crc kubenswrapper[4794]: E0216 17:01:25.792727 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:25 crc kubenswrapper[4794]: I0216 17:01:25.793044 4794 scope.go:117] "RemoveContainer" containerID="6a9b07055fd16bf9dde792f372b5a19f7faf37d643ae4986f169c85fdcfe27d9" Feb 16 17:01:26 crc kubenswrapper[4794]: I0216 17:01:26.411533 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9krvl_d985e4f1-78bb-43f9-b86c-cd47831d602c/ovnkube-controller/3.log" Feb 16 17:01:26 crc kubenswrapper[4794]: I0216 17:01:26.414530 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerStarted","Data":"725d53506d1041fde56a67b4b413ded09a8b73fe4cced1bd1199e1b99c1ed3e1"} Feb 16 17:01:26 crc kubenswrapper[4794]: I0216 17:01:26.414882 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 17:01:26 crc kubenswrapper[4794]: I0216 17:01:26.449566 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" podStartSLOduration=91.449549912 podStartE2EDuration="1m31.449549912s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:26.447810196 +0000 UTC m=+112.395904843" watchObservedRunningTime="2026-02-16 17:01:26.449549912 +0000 UTC m=+112.397644559" Feb 16 17:01:26 crc kubenswrapper[4794]: I0216 17:01:26.892741 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tf698"] Feb 16 17:01:26 crc kubenswrapper[4794]: I0216 17:01:26.893173 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:01:26 crc kubenswrapper[4794]: E0216 17:01:26.893282 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:01:27 crc kubenswrapper[4794]: I0216 17:01:27.790870 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:27 crc kubenswrapper[4794]: I0216 17:01:27.790926 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:27 crc kubenswrapper[4794]: E0216 17:01:27.791015 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:27 crc kubenswrapper[4794]: I0216 17:01:27.791071 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:27 crc kubenswrapper[4794]: E0216 17:01:27.791122 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:27 crc kubenswrapper[4794]: E0216 17:01:27.791338 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:28 crc kubenswrapper[4794]: I0216 17:01:28.790985 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698" Feb 16 17:01:28 crc kubenswrapper[4794]: E0216 17:01:28.791153 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tf698" podUID="894bff1b-b8b9-4c28-8ffe-0e0469958227" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.427937 4794 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.428100 4794 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.491042 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vtrkl"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.491678 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.494276 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2rjhr"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.495055 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-2rjhr" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.495520 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.495552 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.495793 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.496004 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.496531 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.496592 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.496790 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.497162 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2"] Feb 16 17:01:29 crc 
kubenswrapper[4794]: I0216 17:01:29.497633 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.498054 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.498757 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.499329 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.499472 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-mmhdw"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.499881 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.500072 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mmhdw" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.500099 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.500267 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.500349 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.500486 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-zwsbc"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.501086 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zwsbc" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.501414 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-mkhh2"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.501956 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mkhh2" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.504025 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.504629 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.504794 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.504875 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.505023 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.505197 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.505357 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.505517 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.505535 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.505740 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.505868 4794 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.506359 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.506688 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-fg5gp"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.506962 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.513705 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-lgmrt"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.513901 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fg5gp"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.514204 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsztn"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.514406 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-lgmrt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.514614 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-h6xgf"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.514929 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.514941 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.515027 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vhvsb"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.515151 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsztn"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.515193 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.515707 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.515826 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.516268 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.516454 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.516604 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.516787 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.517215 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.517529 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.517690 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.518264 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.518463 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.518598 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.518751 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.519076 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.519213 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.519401 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.519499 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.519559 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.523931 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dsk9b"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.524670 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5rq7"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.525189 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rr75f"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.525587 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b77qj"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.525965 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dp7xf"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.526645 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-dp7xf"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.527514 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vhvsb"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.527860 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5rq7"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.528166 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsk9b"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.528440 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.528843 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b77qj"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.541758 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.542623 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.543780 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.543784 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xtklb"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.544758 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.545100 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.545557 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.556407 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xtklb"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.556835 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.556913 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.557088 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.557180 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.557329 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.557443 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.557569 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.560341 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.560709 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.560871 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.561009 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.561592 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.561758 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5fbkt"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.562693 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.562827 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.563252 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hg4bz"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.563522 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.563598 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.563649 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hg4bz"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.563764 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.564364 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lzvbh"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.565060 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.565702 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lzvbh"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.566344 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.566567 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.566613 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.567182 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5dxq9"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.567690 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5dxq9"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.567840 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6z49s"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.569266 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.571700 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.573618 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s8r99"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.576542 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.576977 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.577244 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.577531 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-swnwm"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.577581 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.577675 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.577755 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.577864 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.577918 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.578060 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.578189 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.578460 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.578627 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.578753 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.578816 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.579032 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.579137 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.579185 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.579283 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.579295 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.579415 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.579418 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.579461 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.579512 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.579523 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.579534 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6z49s"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.579577 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.579667 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.579715 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s8r99"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.579610 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.580554 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h26hh"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.580895 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ghx"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.598793 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qfr5h"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.599222 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h26hh"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.599624 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swnwm"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.599726 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jv2jb"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.599737 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.600272 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ghx"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.601494 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-trusted-ca-bundle\") pod \"console-f9d7485db-zwsbc\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " pod="openshift-console/console-f9d7485db-zwsbc"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.601524 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9d2009dc-5385-4529-b1b3-d14a75a50089-etcd-serving-ca\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.601548 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/077b99e5-e95c-4afd-9008-d1f18a6b2f70-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vtrkl\" (UID: \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.601564 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9bbc4953-c59b-46cd-8f17-513136731d2a-etcd-client\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.607116 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3fa8c07-9947-4f5c-8295-bdec401113b0-console-serving-cert\") pod \"console-f9d7485db-zwsbc\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " pod="openshift-console/console-f9d7485db-zwsbc"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.607411 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a58008d3-84e1-425d-a7de-bc37a0f2664e-metrics-tls\") pod \"dns-operator-744455d44c-dp7xf\" (UID: \"a58008d3-84e1-425d-a7de-bc37a0f2664e\") " pod="openshift-dns-operator/dns-operator-744455d44c-dp7xf"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.607514 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/902f37b4-ec5c-40ae-b110-f17b282f7ddb-serving-cert\") pod \"authentication-operator-69f744f599-b77qj\" (UID: \"902f37b4-ec5c-40ae-b110-f17b282f7ddb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b77qj"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.607596 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnzq9\" (UniqueName: \"kubernetes.io/projected/28dcafba-7fbd-4ee4-aac0-431d46f0a438-kube-api-access-nnzq9\") pod \"openshift-config-operator-7777fb866f-mkhh2\" (UID: \"28dcafba-7fbd-4ee4-aac0-431d46f0a438\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mkhh2"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.607704 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/112f572c-0d1d-4bb9-a66c-202a42a9aba1-serving-cert\") pod \"etcd-operator-b45778765-rr75f\" (UID: \"112f572c-0d1d-4bb9-a66c-202a42a9aba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.607797 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2td9r\" (UniqueName: \"kubernetes.io/projected/5cff129b-bd54-4115-bc42-d5617d10eae0-kube-api-access-2td9r\") pod \"downloads-7954f5f757-fg5gp\" (UID: \"5cff129b-bd54-4115-bc42-d5617d10eae0\") " pod="openshift-console/downloads-7954f5f757-fg5gp"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.607358 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.607832 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5czgb"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.608032 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9bbc4953-c59b-46cd-8f17-513136731d2a-audit-policies\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.608034 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.608145 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/902f37b4-ec5c-40ae-b110-f17b282f7ddb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b77qj\" (UID: \"902f37b4-ec5c-40ae-b110-f17b282f7ddb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b77qj"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.607884 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jv2jb"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.608314 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9d2009dc-5385-4529-b1b3-d14a75a50089-etcd-client\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.608412 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9bbc4953-c59b-46cd-8f17-513136731d2a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.608568 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f455dc97-cc72-4981-ac4d-097fe30413d1-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hg4bz\" (UID: \"f455dc97-cc72-4981-ac4d-097fe30413d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hg4bz"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.608592 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5czgb"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.608525 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hr6kg"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609360 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/112f572c-0d1d-4bb9-a66c-202a42a9aba1-etcd-client\") pod \"etcd-operator-b45778765-rr75f\" (UID: \"112f572c-0d1d-4bb9-a66c-202a42a9aba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609423 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d2009dc-5385-4529-b1b3-d14a75a50089-serving-cert\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609442 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609444 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lztsk\" (UniqueName: \"kubernetes.io/projected/a58008d3-84e1-425d-a7de-bc37a0f2664e-kube-api-access-lztsk\") pod \"dns-operator-744455d44c-dp7xf\" (UID: \"a58008d3-84e1-425d-a7de-bc37a0f2664e\") " pod="openshift-dns-operator/dns-operator-744455d44c-dp7xf"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609508 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d2009dc-5385-4529-b1b3-d14a75a50089-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609529 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8734f760-160a-49c7-9eb3-65e33d816f02-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nsztn\" (UID: \"8734f760-160a-49c7-9eb3-65e33d816f02\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsztn"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609547 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8734f760-160a-49c7-9eb3-65e33d816f02-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nsztn\" (UID: \"8734f760-160a-49c7-9eb3-65e33d816f02\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsztn"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609565 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/902f37b4-ec5c-40ae-b110-f17b282f7ddb-service-ca-bundle\") pod \"authentication-operator-69f744f599-b77qj\" (UID: \"902f37b4-ec5c-40ae-b110-f17b282f7ddb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b77qj"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609582 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bbc4953-c59b-46cd-8f17-513136731d2a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609598 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4943f829-0922-4e87-a750-1cfc2f2f1b72-client-ca\") pod \"route-controller-manager-6576b87f9c-42sb2\" (UID: \"4943f829-0922-4e87-a750-1cfc2f2f1b72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609616 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f455dc97-cc72-4981-ac4d-097fe30413d1-config\") pod \"kube-controller-manager-operator-78b949d7b-hg4bz\" (UID: \"f455dc97-cc72-4981-ac4d-097fe30413d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hg4bz"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609631 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112f572c-0d1d-4bb9-a66c-202a42a9aba1-config\") pod \"etcd-operator-b45778765-rr75f\" (UID: \"112f572c-0d1d-4bb9-a66c-202a42a9aba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609648 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/902f37b4-ec5c-40ae-b110-f17b282f7ddb-config\") pod \"authentication-operator-69f744f599-b77qj\" (UID: \"902f37b4-ec5c-40ae-b110-f17b282f7ddb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b77qj"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609667 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btpks\" (UniqueName: \"kubernetes.io/projected/f3fa8c07-9947-4f5c-8295-bdec401113b0-kube-api-access-btpks\") pod \"console-f9d7485db-zwsbc\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " pod="openshift-console/console-f9d7485db-zwsbc"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609681 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99adae54-586d-4a41-90e1-288285c47957-config\") pod \"machine-approver-56656f9798-mmhdw\" (UID: \"99adae54-586d-4a41-90e1-288285c47957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mmhdw"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609697 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3aee29ce-f5ae-42d5-9c0c-7648739c6c49-trusted-ca\") pod \"ingress-operator-5b745b69d9-dsk9b\" (UID: \"3aee29ce-f5ae-42d5-9c0c-7648739c6c49\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsk9b"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609728 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx4ct\" (UniqueName: \"kubernetes.io/projected/9d2009dc-5385-4529-b1b3-d14a75a50089-kube-api-access-mx4ct\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609744 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d2009dc-5385-4529-b1b3-d14a75a50089-audit-dir\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609761 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-console-config\") pod \"console-f9d7485db-zwsbc\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " pod="openshift-console/console-f9d7485db-zwsbc"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609776 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5f4k\" (UniqueName: \"kubernetes.io/projected/9bbc4953-c59b-46cd-8f17-513136731d2a-kube-api-access-d5f4k\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609794 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/077b99e5-e95c-4afd-9008-d1f18a6b2f70-client-ca\") pod \"controller-manager-879f6c89f-vtrkl\" (UID: \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609810 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl7qt\" (UniqueName: \"kubernetes.io/projected/3aee29ce-f5ae-42d5-9c0c-7648739c6c49-kube-api-access-sl7qt\") pod \"ingress-operator-5b745b69d9-dsk9b\" (UID: \"3aee29ce-f5ae-42d5-9c0c-7648739c6c49\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsk9b"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609826 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName:
\"kubernetes.io/secret/f3fa8c07-9947-4f5c-8295-bdec401113b0-console-oauth-config\") pod \"console-f9d7485db-zwsbc\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " pod="openshift-console/console-f9d7485db-zwsbc" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609843 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9d2009dc-5385-4529-b1b3-d14a75a50089-image-import-ca\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609861 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5zdw\" (UniqueName: \"kubernetes.io/projected/077b99e5-e95c-4afd-9008-d1f18a6b2f70-kube-api-access-p5zdw\") pod \"controller-manager-879f6c89f-vtrkl\" (UID: \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609877 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bbc4953-c59b-46cd-8f17-513136731d2a-serving-cert\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609892 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9bbc4953-c59b-46cd-8f17-513136731d2a-encryption-config\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609907 4794 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/33ee8fad-d568-45d8-b55f-3302e5f3c9c0-stats-auth\") pod \"router-default-5444994796-xtklb\" (UID: \"33ee8fad-d568-45d8-b55f-3302e5f3c9c0\") " pod="openshift-ingress/router-default-5444994796-xtklb" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609921 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f455dc97-cc72-4981-ac4d-097fe30413d1-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hg4bz\" (UID: \"f455dc97-cc72-4981-ac4d-097fe30413d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hg4bz" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609940 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d2009dc-5385-4529-b1b3-d14a75a50089-config\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609956 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33ee8fad-d568-45d8-b55f-3302e5f3c9c0-service-ca-bundle\") pod \"router-default-5444994796-xtklb\" (UID: \"33ee8fad-d568-45d8-b55f-3302e5f3c9c0\") " pod="openshift-ingress/router-default-5444994796-xtklb" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609974 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3aee29ce-f5ae-42d5-9c0c-7648739c6c49-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dsk9b\" (UID: 
\"3aee29ce-f5ae-42d5-9c0c-7648739c6c49\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsk9b" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.609988 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c033bce4-9921-49ec-bda6-ba7f79647c00-images\") pod \"machine-api-operator-5694c8668f-2rjhr\" (UID: \"c033bce4-9921-49ec-bda6-ba7f79647c00\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2rjhr" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610013 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9d2009dc-5385-4529-b1b3-d14a75a50089-node-pullsecrets\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610027 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/077b99e5-e95c-4afd-9008-d1f18a6b2f70-serving-cert\") pod \"controller-manager-879f6c89f-vtrkl\" (UID: \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610041 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c033bce4-9921-49ec-bda6-ba7f79647c00-config\") pod \"machine-api-operator-5694c8668f-2rjhr\" (UID: \"c033bce4-9921-49ec-bda6-ba7f79647c00\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2rjhr" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610054 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-service-ca\") pod \"console-f9d7485db-zwsbc\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " pod="openshift-console/console-f9d7485db-zwsbc" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610072 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/99adae54-586d-4a41-90e1-288285c47957-auth-proxy-config\") pod \"machine-approver-56656f9798-mmhdw\" (UID: \"99adae54-586d-4a41-90e1-288285c47957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mmhdw" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610096 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/33ee8fad-d568-45d8-b55f-3302e5f3c9c0-default-certificate\") pod \"router-default-5444994796-xtklb\" (UID: \"33ee8fad-d568-45d8-b55f-3302e5f3c9c0\") " pod="openshift-ingress/router-default-5444994796-xtklb" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610111 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33ee8fad-d568-45d8-b55f-3302e5f3c9c0-metrics-certs\") pod \"router-default-5444994796-xtklb\" (UID: \"33ee8fad-d568-45d8-b55f-3302e5f3c9c0\") " pod="openshift-ingress/router-default-5444994796-xtklb" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610133 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9d2009dc-5385-4529-b1b3-d14a75a50089-audit\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610149 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/28dcafba-7fbd-4ee4-aac0-431d46f0a438-available-featuregates\") pod \"openshift-config-operator-7777fb866f-mkhh2\" (UID: \"28dcafba-7fbd-4ee4-aac0-431d46f0a438\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mkhh2" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610166 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3aee29ce-f5ae-42d5-9c0c-7648739c6c49-metrics-tls\") pod \"ingress-operator-5b745b69d9-dsk9b\" (UID: \"3aee29ce-f5ae-42d5-9c0c-7648739c6c49\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsk9b" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610182 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmnsm\" (UniqueName: \"kubernetes.io/projected/112f572c-0d1d-4bb9-a66c-202a42a9aba1-kube-api-access-nmnsm\") pod \"etcd-operator-b45778765-rr75f\" (UID: \"112f572c-0d1d-4bb9-a66c-202a42a9aba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610201 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9d2009dc-5385-4529-b1b3-d14a75a50089-encryption-config\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610217 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79fts\" (UniqueName: \"kubernetes.io/projected/902f37b4-ec5c-40ae-b110-f17b282f7ddb-kube-api-access-79fts\") pod \"authentication-operator-69f744f599-b77qj\" (UID: 
\"902f37b4-ec5c-40ae-b110-f17b282f7ddb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b77qj" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610232 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4943f829-0922-4e87-a750-1cfc2f2f1b72-serving-cert\") pod \"route-controller-manager-6576b87f9c-42sb2\" (UID: \"4943f829-0922-4e87-a750-1cfc2f2f1b72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610249 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z49tl\" (UniqueName: \"kubernetes.io/projected/99adae54-586d-4a41-90e1-288285c47957-kube-api-access-z49tl\") pod \"machine-approver-56656f9798-mmhdw\" (UID: \"99adae54-586d-4a41-90e1-288285c47957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mmhdw" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610265 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28dcafba-7fbd-4ee4-aac0-431d46f0a438-serving-cert\") pod \"openshift-config-operator-7777fb866f-mkhh2\" (UID: \"28dcafba-7fbd-4ee4-aac0-431d46f0a438\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mkhh2" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610279 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/112f572c-0d1d-4bb9-a66c-202a42a9aba1-etcd-service-ca\") pod \"etcd-operator-b45778765-rr75f\" (UID: \"112f572c-0d1d-4bb9-a66c-202a42a9aba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610295 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv4m5\" (UniqueName: \"kubernetes.io/projected/4943f829-0922-4e87-a750-1cfc2f2f1b72-kube-api-access-tv4m5\") pod \"route-controller-manager-6576b87f9c-42sb2\" (UID: \"4943f829-0922-4e87-a750-1cfc2f2f1b72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610358 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-oauth-serving-cert\") pod \"console-f9d7485db-zwsbc\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " pod="openshift-console/console-f9d7485db-zwsbc" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610374 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c033bce4-9921-49ec-bda6-ba7f79647c00-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2rjhr\" (UID: \"c033bce4-9921-49ec-bda6-ba7f79647c00\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2rjhr" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610404 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prbtg\" (UniqueName: \"kubernetes.io/projected/8734f760-160a-49c7-9eb3-65e33d816f02-kube-api-access-prbtg\") pod \"openshift-controller-manager-operator-756b6f6bc6-nsztn\" (UID: \"8734f760-160a-49c7-9eb3-65e33d816f02\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsztn" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610420 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/077b99e5-e95c-4afd-9008-d1f18a6b2f70-config\") pod \"controller-manager-879f6c89f-vtrkl\" (UID: \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610435 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbs8f\" (UniqueName: \"kubernetes.io/projected/c033bce4-9921-49ec-bda6-ba7f79647c00-kube-api-access-hbs8f\") pod \"machine-api-operator-5694c8668f-2rjhr\" (UID: \"c033bce4-9921-49ec-bda6-ba7f79647c00\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2rjhr" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610451 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4943f829-0922-4e87-a750-1cfc2f2f1b72-config\") pod \"route-controller-manager-6576b87f9c-42sb2\" (UID: \"4943f829-0922-4e87-a750-1cfc2f2f1b72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610467 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bbc4953-c59b-46cd-8f17-513136731d2a-audit-dir\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610482 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/99adae54-586d-4a41-90e1-288285c47957-machine-approver-tls\") pod \"machine-approver-56656f9798-mmhdw\" (UID: \"99adae54-586d-4a41-90e1-288285c47957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mmhdw" Feb 16 
17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610496 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/112f572c-0d1d-4bb9-a66c-202a42a9aba1-etcd-ca\") pod \"etcd-operator-b45778765-rr75f\" (UID: \"112f572c-0d1d-4bb9-a66c-202a42a9aba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.610514 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n26dq\" (UniqueName: \"kubernetes.io/projected/33ee8fad-d568-45d8-b55f-3302e5f3c9c0-kube-api-access-n26dq\") pod \"router-default-5444994796-xtklb\" (UID: \"33ee8fad-d568-45d8-b55f-3302e5f3c9c0\") " pod="openshift-ingress/router-default-5444994796-xtklb" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.611684 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hr6kg" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.613492 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.616576 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vtrkl"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.617620 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vwnxb"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.618258 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vwnxb" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.618491 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6x29q"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.619119 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6x29q" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.621099 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g2zhh"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.621725 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g2zhh" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.625660 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2rjhr"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.627275 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-85b84"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.628154 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-85b84" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.630443 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fg5gp"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.632356 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zwsbc"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.633975 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.636147 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-b6clj"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.636741 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-b6clj" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.638908 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-mkhh2"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.640750 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-hmkvr"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.641354 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hmkvr" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.642188 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.642653 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.644252 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.645181 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.648034 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-lgmrt"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.648716 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.650274 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6z49s"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.651738 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dsk9b"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.653551 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5rq7"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.656139 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.656333 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hg4bz"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.657509 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h26hh"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.659575 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5dxq9"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.662318 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lzvbh"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.664032 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-85b84"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.666796 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5fbkt"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.667712 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vwnxb"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.669463 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsztn"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.673937 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.674039 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b77qj"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.677574 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dp7xf"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.680039 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qfr5h"] Feb 16 
17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.681591 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-h6xgf"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.684787 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s8r99"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.685980 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6x29q"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.687343 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vhvsb"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.688381 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-swnwm"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.689383 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4vwx5"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.689902 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4vwx5"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.691052 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jv2jb"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.692171 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-b6clj"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.693586 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.693904 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ghx"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.695772 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rr75f"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.697425 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5czgb"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.703879 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g2zhh"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.708004 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hr6kg"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.708573 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4vwx5"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.709658 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.710951 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.711188 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/902f37b4-ec5c-40ae-b110-f17b282f7ddb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b77qj\" (UID: \"902f37b4-ec5c-40ae-b110-f17b282f7ddb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b77qj"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.711229 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2td9r\" (UniqueName: \"kubernetes.io/projected/5cff129b-bd54-4115-bc42-d5617d10eae0-kube-api-access-2td9r\") pod \"downloads-7954f5f757-fg5gp\" (UID: \"5cff129b-bd54-4115-bc42-d5617d10eae0\") " pod="openshift-console/downloads-7954f5f757-fg5gp"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.711256 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9bbc4953-c59b-46cd-8f17-513136731d2a-audit-policies\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.711463 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9d2009dc-5385-4529-b1b3-d14a75a50089-etcd-client\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.711511 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9bbc4953-c59b-46cd-8f17-513136731d2a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.711535 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f455dc97-cc72-4981-ac4d-097fe30413d1-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hg4bz\" (UID: \"f455dc97-cc72-4981-ac4d-097fe30413d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hg4bz"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.712018 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9bbc4953-c59b-46cd-8f17-513136731d2a-audit-policies\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.712157 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/112f572c-0d1d-4bb9-a66c-202a42a9aba1-etcd-client\") pod \"etcd-operator-b45778765-rr75f\" (UID: \"112f572c-0d1d-4bb9-a66c-202a42a9aba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.712244 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lztsk\" (UniqueName: \"kubernetes.io/projected/a58008d3-84e1-425d-a7de-bc37a0f2664e-kube-api-access-lztsk\") pod \"dns-operator-744455d44c-dp7xf\" (UID: \"a58008d3-84e1-425d-a7de-bc37a0f2664e\") " pod="openshift-dns-operator/dns-operator-744455d44c-dp7xf"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.712270 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d2009dc-5385-4529-b1b3-d14a75a50089-serving-cert\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.712618 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d2009dc-5385-4529-b1b3-d14a75a50089-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.712696 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8734f760-160a-49c7-9eb3-65e33d816f02-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nsztn\" (UID: \"8734f760-160a-49c7-9eb3-65e33d816f02\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsztn"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.712718 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8734f760-160a-49c7-9eb3-65e33d816f02-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nsztn\" (UID: \"8734f760-160a-49c7-9eb3-65e33d816f02\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsztn"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.712739 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/902f37b4-ec5c-40ae-b110-f17b282f7ddb-service-ca-bundle\") pod \"authentication-operator-69f744f599-b77qj\" (UID: \"902f37b4-ec5c-40ae-b110-f17b282f7ddb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b77qj"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.712815 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bbc4953-c59b-46cd-8f17-513136731d2a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.712837 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/902f37b4-ec5c-40ae-b110-f17b282f7ddb-config\") pod \"authentication-operator-69f744f599-b77qj\" (UID: \"902f37b4-ec5c-40ae-b110-f17b282f7ddb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b77qj"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.712860 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4943f829-0922-4e87-a750-1cfc2f2f1b72-client-ca\") pod \"route-controller-manager-6576b87f9c-42sb2\" (UID: \"4943f829-0922-4e87-a750-1cfc2f2f1b72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713039 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9bbc4953-c59b-46cd-8f17-513136731d2a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713142 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f455dc97-cc72-4981-ac4d-097fe30413d1-config\") pod \"kube-controller-manager-operator-78b949d7b-hg4bz\" (UID: \"f455dc97-cc72-4981-ac4d-097fe30413d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hg4bz"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713167 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112f572c-0d1d-4bb9-a66c-202a42a9aba1-config\") pod \"etcd-operator-b45778765-rr75f\" (UID: \"112f572c-0d1d-4bb9-a66c-202a42a9aba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713187 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3aee29ce-f5ae-42d5-9c0c-7648739c6c49-trusted-ca\") pod \"ingress-operator-5b745b69d9-dsk9b\" (UID: \"3aee29ce-f5ae-42d5-9c0c-7648739c6c49\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsk9b"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713209 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btpks\" (UniqueName: \"kubernetes.io/projected/f3fa8c07-9947-4f5c-8295-bdec401113b0-kube-api-access-btpks\") pod \"console-f9d7485db-zwsbc\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " pod="openshift-console/console-f9d7485db-zwsbc"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713230 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99adae54-586d-4a41-90e1-288285c47957-config\") pod \"machine-approver-56656f9798-mmhdw\" (UID: \"99adae54-586d-4a41-90e1-288285c47957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mmhdw"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713249 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx4ct\" (UniqueName: \"kubernetes.io/projected/9d2009dc-5385-4529-b1b3-d14a75a50089-kube-api-access-mx4ct\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713284 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d2009dc-5385-4529-b1b3-d14a75a50089-audit-dir\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713329 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-console-config\") pod \"console-f9d7485db-zwsbc\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " pod="openshift-console/console-f9d7485db-zwsbc"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713351 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5f4k\" (UniqueName: \"kubernetes.io/projected/9bbc4953-c59b-46cd-8f17-513136731d2a-kube-api-access-d5f4k\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713376 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl7qt\" (UniqueName: \"kubernetes.io/projected/3aee29ce-f5ae-42d5-9c0c-7648739c6c49-kube-api-access-sl7qt\") pod \"ingress-operator-5b745b69d9-dsk9b\" (UID: \"3aee29ce-f5ae-42d5-9c0c-7648739c6c49\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsk9b"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713455 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/077b99e5-e95c-4afd-9008-d1f18a6b2f70-client-ca\") pod \"controller-manager-879f6c89f-vtrkl\" (UID: \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713479 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f3fa8c07-9947-4f5c-8295-bdec401113b0-console-oauth-config\") pod \"console-f9d7485db-zwsbc\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " pod="openshift-console/console-f9d7485db-zwsbc"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713502 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9d2009dc-5385-4529-b1b3-d14a75a50089-image-import-ca\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713523 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bbc4953-c59b-46cd-8f17-513136731d2a-serving-cert\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713547 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5zdw\" (UniqueName: \"kubernetes.io/projected/077b99e5-e95c-4afd-9008-d1f18a6b2f70-kube-api-access-p5zdw\") pod \"controller-manager-879f6c89f-vtrkl\" (UID: \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713571 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9bbc4953-c59b-46cd-8f17-513136731d2a-encryption-config\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713592 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33ee8fad-d568-45d8-b55f-3302e5f3c9c0-service-ca-bundle\") pod \"router-default-5444994796-xtklb\" (UID: \"33ee8fad-d568-45d8-b55f-3302e5f3c9c0\") " pod="openshift-ingress/router-default-5444994796-xtklb"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713612 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/33ee8fad-d568-45d8-b55f-3302e5f3c9c0-stats-auth\") pod \"router-default-5444994796-xtklb\" (UID: \"33ee8fad-d568-45d8-b55f-3302e5f3c9c0\") " pod="openshift-ingress/router-default-5444994796-xtklb"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713630 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f455dc97-cc72-4981-ac4d-097fe30413d1-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hg4bz\" (UID: \"f455dc97-cc72-4981-ac4d-097fe30413d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hg4bz"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713654 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d2009dc-5385-4529-b1b3-d14a75a50089-config\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713713 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3aee29ce-f5ae-42d5-9c0c-7648739c6c49-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dsk9b\" (UID: \"3aee29ce-f5ae-42d5-9c0c-7648739c6c49\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsk9b"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713733 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c033bce4-9921-49ec-bda6-ba7f79647c00-images\") pod \"machine-api-operator-5694c8668f-2rjhr\" (UID: \"c033bce4-9921-49ec-bda6-ba7f79647c00\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2rjhr"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713769 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9d2009dc-5385-4529-b1b3-d14a75a50089-node-pullsecrets\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713792 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/077b99e5-e95c-4afd-9008-d1f18a6b2f70-serving-cert\") pod \"controller-manager-879f6c89f-vtrkl\" (UID: \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713813 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c033bce4-9921-49ec-bda6-ba7f79647c00-config\") pod \"machine-api-operator-5694c8668f-2rjhr\" (UID: \"c033bce4-9921-49ec-bda6-ba7f79647c00\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2rjhr"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713847 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-service-ca\") pod \"console-f9d7485db-zwsbc\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " pod="openshift-console/console-f9d7485db-zwsbc"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713869 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/99adae54-586d-4a41-90e1-288285c47957-auth-proxy-config\") pod \"machine-approver-56656f9798-mmhdw\" (UID: \"99adae54-586d-4a41-90e1-288285c47957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mmhdw"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713895 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/33ee8fad-d568-45d8-b55f-3302e5f3c9c0-default-certificate\") pod \"router-default-5444994796-xtklb\" (UID: \"33ee8fad-d568-45d8-b55f-3302e5f3c9c0\") " pod="openshift-ingress/router-default-5444994796-xtklb"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713918 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33ee8fad-d568-45d8-b55f-3302e5f3c9c0-metrics-certs\") pod \"router-default-5444994796-xtklb\" (UID: \"33ee8fad-d568-45d8-b55f-3302e5f3c9c0\") " pod="openshift-ingress/router-default-5444994796-xtklb"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713953 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9d2009dc-5385-4529-b1b3-d14a75a50089-audit\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713978 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/28dcafba-7fbd-4ee4-aac0-431d46f0a438-available-featuregates\") pod \"openshift-config-operator-7777fb866f-mkhh2\" (UID: \"28dcafba-7fbd-4ee4-aac0-431d46f0a438\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mkhh2"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714002 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmnsm\" (UniqueName: \"kubernetes.io/projected/112f572c-0d1d-4bb9-a66c-202a42a9aba1-kube-api-access-nmnsm\") pod \"etcd-operator-b45778765-rr75f\" (UID: \"112f572c-0d1d-4bb9-a66c-202a42a9aba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714028 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3aee29ce-f5ae-42d5-9c0c-7648739c6c49-metrics-tls\") pod \"ingress-operator-5b745b69d9-dsk9b\" (UID: \"3aee29ce-f5ae-42d5-9c0c-7648739c6c49\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsk9b"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714055 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9d2009dc-5385-4529-b1b3-d14a75a50089-encryption-config\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714399 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/112f572c-0d1d-4bb9-a66c-202a42a9aba1-etcd-service-ca\") pod \"etcd-operator-b45778765-rr75f\" (UID: \"112f572c-0d1d-4bb9-a66c-202a42a9aba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714429 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79fts\" (UniqueName: \"kubernetes.io/projected/902f37b4-ec5c-40ae-b110-f17b282f7ddb-kube-api-access-79fts\") pod \"authentication-operator-69f744f599-b77qj\" (UID: \"902f37b4-ec5c-40ae-b110-f17b282f7ddb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b77qj"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714455 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4943f829-0922-4e87-a750-1cfc2f2f1b72-serving-cert\") pod \"route-controller-manager-6576b87f9c-42sb2\" (UID: \"4943f829-0922-4e87-a750-1cfc2f2f1b72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714479 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z49tl\" (UniqueName: \"kubernetes.io/projected/99adae54-586d-4a41-90e1-288285c47957-kube-api-access-z49tl\") pod \"machine-approver-56656f9798-mmhdw\" (UID: \"99adae54-586d-4a41-90e1-288285c47957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mmhdw"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714501 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28dcafba-7fbd-4ee4-aac0-431d46f0a438-serving-cert\") pod \"openshift-config-operator-7777fb866f-mkhh2\" (UID: \"28dcafba-7fbd-4ee4-aac0-431d46f0a438\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mkhh2"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714566 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv4m5\" (UniqueName: \"kubernetes.io/projected/4943f829-0922-4e87-a750-1cfc2f2f1b72-kube-api-access-tv4m5\") pod \"route-controller-manager-6576b87f9c-42sb2\" (UID: \"4943f829-0922-4e87-a750-1cfc2f2f1b72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714589 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-oauth-serving-cert\") pod \"console-f9d7485db-zwsbc\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " pod="openshift-console/console-f9d7485db-zwsbc"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714615 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c033bce4-9921-49ec-bda6-ba7f79647c00-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2rjhr\" (UID: \"c033bce4-9921-49ec-bda6-ba7f79647c00\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2rjhr"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714639 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prbtg\" (UniqueName: \"kubernetes.io/projected/8734f760-160a-49c7-9eb3-65e33d816f02-kube-api-access-prbtg\") pod \"openshift-controller-manager-operator-756b6f6bc6-nsztn\" (UID: \"8734f760-160a-49c7-9eb3-65e33d816f02\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsztn"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714663 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/077b99e5-e95c-4afd-9008-d1f18a6b2f70-config\") pod \"controller-manager-879f6c89f-vtrkl\" (UID: \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714685 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbs8f\" (UniqueName: \"kubernetes.io/projected/c033bce4-9921-49ec-bda6-ba7f79647c00-kube-api-access-hbs8f\") pod \"machine-api-operator-5694c8668f-2rjhr\" (UID: \"c033bce4-9921-49ec-bda6-ba7f79647c00\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2rjhr"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714706 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/112f572c-0d1d-4bb9-a66c-202a42a9aba1-etcd-ca\") pod \"etcd-operator-b45778765-rr75f\" (UID: \"112f572c-0d1d-4bb9-a66c-202a42a9aba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714770 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4943f829-0922-4e87-a750-1cfc2f2f1b72-config\") pod \"route-controller-manager-6576b87f9c-42sb2\" (UID: \"4943f829-0922-4e87-a750-1cfc2f2f1b72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714799 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bbc4953-c59b-46cd-8f17-513136731d2a-audit-dir\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714820 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/99adae54-586d-4a41-90e1-288285c47957-machine-approver-tls\") pod \"machine-approver-56656f9798-mmhdw\" (UID: \"99adae54-586d-4a41-90e1-288285c47957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mmhdw"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714845 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n26dq\" (UniqueName: \"kubernetes.io/projected/33ee8fad-d568-45d8-b55f-3302e5f3c9c0-kube-api-access-n26dq\") pod \"router-default-5444994796-xtklb\" (UID: \"33ee8fad-d568-45d8-b55f-3302e5f3c9c0\") " pod="openshift-ingress/router-default-5444994796-xtklb"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714871 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-trusted-ca-bundle\") pod \"console-f9d7485db-zwsbc\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " pod="openshift-console/console-f9d7485db-zwsbc"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714890 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9d2009dc-5385-4529-b1b3-d14a75a50089-etcd-serving-ca\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714912 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/077b99e5-e95c-4afd-9008-d1f18a6b2f70-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vtrkl\" (UID: \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714932 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9bbc4953-c59b-46cd-8f17-513136731d2a-etcd-client\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714947 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/902f37b4-ec5c-40ae-b110-f17b282f7ddb-serving-cert\") pod \"authentication-operator-69f744f599-b77qj\" (UID: \"902f37b4-ec5c-40ae-b110-f17b282f7ddb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b77qj"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714977 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3fa8c07-9947-4f5c-8295-bdec401113b0-console-serving-cert\") pod \"console-f9d7485db-zwsbc\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " pod="openshift-console/console-f9d7485db-zwsbc"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.714998 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a58008d3-84e1-425d-a7de-bc37a0f2664e-metrics-tls\") pod \"dns-operator-744455d44c-dp7xf\" (UID: \"a58008d3-84e1-425d-a7de-bc37a0f2664e\") " pod="openshift-dns-operator/dns-operator-744455d44c-dp7xf"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.715020 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/112f572c-0d1d-4bb9-a66c-202a42a9aba1-serving-cert\") pod \"etcd-operator-b45778765-rr75f\" (UID: \"112f572c-0d1d-4bb9-a66c-202a42a9aba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.715042 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnzq9\" (UniqueName: \"kubernetes.io/projected/28dcafba-7fbd-4ee4-aac0-431d46f0a438-kube-api-access-nnzq9\") pod \"openshift-config-operator-7777fb866f-mkhh2\" (UID: \"28dcafba-7fbd-4ee4-aac0-431d46f0a438\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mkhh2"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.715926 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33ee8fad-d568-45d8-b55f-3302e5f3c9c0-service-ca-bundle\") pod \"router-default-5444994796-xtklb\" (UID: \"33ee8fad-d568-45d8-b55f-3302e5f3c9c0\") " pod="openshift-ingress/router-default-5444994796-xtklb"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.713918 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/902f37b4-ec5c-40ae-b110-f17b282f7ddb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b77qj\" (UID: \"902f37b4-ec5c-40ae-b110-f17b282f7ddb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b77qj"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.716908 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9d2009dc-5385-4529-b1b3-d14a75a50089-node-pullsecrets\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.717273 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bbc4953-c59b-46cd-8f17-513136731d2a-audit-dir\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.717454 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c033bce4-9921-49ec-bda6-ba7f79647c00-images\") pod \"machine-api-operator-5694c8668f-2rjhr\" (UID: \"c033bce4-9921-49ec-bda6-ba7f79647c00\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2rjhr"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.717928 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/28dcafba-7fbd-4ee4-aac0-431d46f0a438-available-featuregates\") pod \"openshift-config-operator-7777fb866f-mkhh2\" (UID: \"28dcafba-7fbd-4ee4-aac0-431d46f0a438\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mkhh2"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.718063 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/99adae54-586d-4a41-90e1-288285c47957-auth-proxy-config\") pod \"machine-approver-56656f9798-mmhdw\" (UID: \"99adae54-586d-4a41-90e1-288285c47957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mmhdw"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.718138 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8734f760-160a-49c7-9eb3-65e33d816f02-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nsztn\" (UID: \"8734f760-160a-49c7-9eb3-65e33d816f02\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsztn"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.718320 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-znzx2"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.718784 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9bbc4953-c59b-46cd-8f17-513136731d2a-encryption-config\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.718890 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c033bce4-9921-49ec-bda6-ba7f79647c00-config\") pod \"machine-api-operator-5694c8668f-2rjhr\" (UID: \"c033bce4-9921-49ec-bda6-ba7f79647c00\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2rjhr"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.718905 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9d2009dc-5385-4529-b1b3-d14a75a50089-etcd-client\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.719042 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-service-ca\") pod \"console-f9d7485db-zwsbc\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " pod="openshift-console/console-f9d7485db-zwsbc"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.719437 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8734f760-160a-49c7-9eb3-65e33d816f02-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nsztn\" (UID: \"8734f760-160a-49c7-9eb3-65e33d816f02\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsztn"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.719462 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-znzx2"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.719483 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-9btqh"]
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.719500 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d2009dc-5385-4529-b1b3-d14a75a50089-audit-dir\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.719653 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/902f37b4-ec5c-40ae-b110-f17b282f7ddb-service-ca-bundle\") pod \"authentication-operator-69f744f599-b77qj\" (UID: \"902f37b4-ec5c-40ae-b110-f17b282f7ddb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b77qj"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.719669 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/077b99e5-e95c-4afd-9008-d1f18a6b2f70-serving-cert\") pod \"controller-manager-879f6c89f-vtrkl\" (UID: \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.720121 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bbc4953-c59b-46cd-8f17-513136731d2a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"
Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.720138 4794 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-dns/dns-default-9btqh" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.720139 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/33ee8fad-d568-45d8-b55f-3302e5f3c9c0-stats-auth\") pod \"router-default-5444994796-xtklb\" (UID: \"33ee8fad-d568-45d8-b55f-3302e5f3c9c0\") " pod="openshift-ingress/router-default-5444994796-xtklb" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.720214 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-console-config\") pod \"console-f9d7485db-zwsbc\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " pod="openshift-console/console-f9d7485db-zwsbc" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.720434 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-oauth-serving-cert\") pod \"console-f9d7485db-zwsbc\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " pod="openshift-console/console-f9d7485db-zwsbc" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.720726 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/902f37b4-ec5c-40ae-b110-f17b282f7ddb-config\") pod \"authentication-operator-69f744f599-b77qj\" (UID: \"902f37b4-ec5c-40ae-b110-f17b282f7ddb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b77qj" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.721112 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28dcafba-7fbd-4ee4-aac0-431d46f0a438-serving-cert\") pod \"openshift-config-operator-7777fb866f-mkhh2\" (UID: \"28dcafba-7fbd-4ee4-aac0-431d46f0a438\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-mkhh2" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.721394 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3aee29ce-f5ae-42d5-9c0c-7648739c6c49-metrics-tls\") pod \"ingress-operator-5b745b69d9-dsk9b\" (UID: \"3aee29ce-f5ae-42d5-9c0c-7648739c6c49\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsk9b" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.721574 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/077b99e5-e95c-4afd-9008-d1f18a6b2f70-client-ca\") pod \"controller-manager-879f6c89f-vtrkl\" (UID: \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.721606 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4943f829-0922-4e87-a750-1cfc2f2f1b72-client-ca\") pod \"route-controller-manager-6576b87f9c-42sb2\" (UID: \"4943f829-0922-4e87-a750-1cfc2f2f1b72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.721660 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/33ee8fad-d568-45d8-b55f-3302e5f3c9c0-default-certificate\") pod \"router-default-5444994796-xtklb\" (UID: \"33ee8fad-d568-45d8-b55f-3302e5f3c9c0\") " pod="openshift-ingress/router-default-5444994796-xtklb" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.722208 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/077b99e5-e95c-4afd-9008-d1f18a6b2f70-config\") pod \"controller-manager-879f6c89f-vtrkl\" (UID: 
\"077b99e5-e95c-4afd-9008-d1f18a6b2f70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.722328 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112f572c-0d1d-4bb9-a66c-202a42a9aba1-config\") pod \"etcd-operator-b45778765-rr75f\" (UID: \"112f572c-0d1d-4bb9-a66c-202a42a9aba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.722588 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9btqh"] Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.722682 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-znzx2" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.722703 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/112f572c-0d1d-4bb9-a66c-202a42a9aba1-etcd-ca\") pod \"etcd-operator-b45778765-rr75f\" (UID: \"112f572c-0d1d-4bb9-a66c-202a42a9aba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.723357 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/99adae54-586d-4a41-90e1-288285c47957-machine-approver-tls\") pod \"machine-approver-56656f9798-mmhdw\" (UID: \"99adae54-586d-4a41-90e1-288285c47957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mmhdw" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.723475 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4943f829-0922-4e87-a750-1cfc2f2f1b72-config\") pod \"route-controller-manager-6576b87f9c-42sb2\" (UID: \"4943f829-0922-4e87-a750-1cfc2f2f1b72\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.723561 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3aee29ce-f5ae-42d5-9c0c-7648739c6c49-trusted-ca\") pod \"ingress-operator-5b745b69d9-dsk9b\" (UID: \"3aee29ce-f5ae-42d5-9c0c-7648739c6c49\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsk9b" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.723983 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-trusted-ca-bundle\") pod \"console-f9d7485db-zwsbc\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " pod="openshift-console/console-f9d7485db-zwsbc" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.724063 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99adae54-586d-4a41-90e1-288285c47957-config\") pod \"machine-approver-56656f9798-mmhdw\" (UID: \"99adae54-586d-4a41-90e1-288285c47957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mmhdw" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.724176 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9bbc4953-c59b-46cd-8f17-513136731d2a-etcd-client\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.724532 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f3fa8c07-9947-4f5c-8295-bdec401113b0-console-oauth-config\") pod \"console-f9d7485db-zwsbc\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " 
pod="openshift-console/console-f9d7485db-zwsbc" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.724728 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/112f572c-0d1d-4bb9-a66c-202a42a9aba1-etcd-service-ca\") pod \"etcd-operator-b45778765-rr75f\" (UID: \"112f572c-0d1d-4bb9-a66c-202a42a9aba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.725068 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/077b99e5-e95c-4afd-9008-d1f18a6b2f70-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vtrkl\" (UID: \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.725241 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c033bce4-9921-49ec-bda6-ba7f79647c00-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2rjhr\" (UID: \"c033bce4-9921-49ec-bda6-ba7f79647c00\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2rjhr" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.725796 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.725824 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33ee8fad-d568-45d8-b55f-3302e5f3c9c0-metrics-certs\") pod \"router-default-5444994796-xtklb\" (UID: \"33ee8fad-d568-45d8-b55f-3302e5f3c9c0\") " pod="openshift-ingress/router-default-5444994796-xtklb" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.725831 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/112f572c-0d1d-4bb9-a66c-202a42a9aba1-etcd-client\") pod \"etcd-operator-b45778765-rr75f\" (UID: \"112f572c-0d1d-4bb9-a66c-202a42a9aba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.726511 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a58008d3-84e1-425d-a7de-bc37a0f2664e-metrics-tls\") pod \"dns-operator-744455d44c-dp7xf\" (UID: \"a58008d3-84e1-425d-a7de-bc37a0f2664e\") " pod="openshift-dns-operator/dns-operator-744455d44c-dp7xf" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.726790 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/902f37b4-ec5c-40ae-b110-f17b282f7ddb-serving-cert\") pod \"authentication-operator-69f744f599-b77qj\" (UID: \"902f37b4-ec5c-40ae-b110-f17b282f7ddb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b77qj" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.727169 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d2009dc-5385-4529-b1b3-d14a75a50089-serving-cert\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.727533 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bbc4953-c59b-46cd-8f17-513136731d2a-serving-cert\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.727624 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4943f829-0922-4e87-a750-1cfc2f2f1b72-serving-cert\") pod \"route-controller-manager-6576b87f9c-42sb2\" (UID: \"4943f829-0922-4e87-a750-1cfc2f2f1b72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.727626 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9d2009dc-5385-4529-b1b3-d14a75a50089-encryption-config\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.731000 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3fa8c07-9947-4f5c-8295-bdec401113b0-console-serving-cert\") pod \"console-f9d7485db-zwsbc\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " pod="openshift-console/console-f9d7485db-zwsbc" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.733731 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/112f572c-0d1d-4bb9-a66c-202a42a9aba1-serving-cert\") pod \"etcd-operator-b45778765-rr75f\" (UID: \"112f572c-0d1d-4bb9-a66c-202a42a9aba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.734414 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.737972 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d2009dc-5385-4529-b1b3-d14a75a50089-config\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt" Feb 16 17:01:29 crc 
kubenswrapper[4794]: I0216 17:01:29.754355 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.759070 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9d2009dc-5385-4529-b1b3-d14a75a50089-audit\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.774143 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.785414 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9d2009dc-5385-4529-b1b3-d14a75a50089-etcd-serving-ca\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.790476 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.790513 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.790655 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.794841 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.806530 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9d2009dc-5385-4529-b1b3-d14a75a50089-image-import-ca\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.819264 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.828871 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d2009dc-5385-4529-b1b3-d14a75a50089-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.834160 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.854061 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.874276 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.879796 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/f455dc97-cc72-4981-ac4d-097fe30413d1-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hg4bz\" (UID: \"f455dc97-cc72-4981-ac4d-097fe30413d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hg4bz" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.893915 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.902739 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f455dc97-cc72-4981-ac4d-097fe30413d1-config\") pod \"kube-controller-manager-operator-78b949d7b-hg4bz\" (UID: \"f455dc97-cc72-4981-ac4d-097fe30413d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hg4bz" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.934550 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.954769 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.974478 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 17:01:29 crc kubenswrapper[4794]: I0216 17:01:29.994279 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.013847 4794 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.033779 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.054116 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.074035 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.094570 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.115354 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.135108 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.154843 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.176511 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.193743 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.215326 4794 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.234612 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.255171 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.274454 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.294388 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.313899 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.334787 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.354518 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.373842 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.394641 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.414799 4794 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.440246 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.454908 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.474543 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.495556 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.514827 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.534598 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.564610 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.574225 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.595103 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.613217 4794 request.go:700] Waited for 1.002084814s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-cliconfig&limit=500&resourceVersion=0 Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.615952 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.634106 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.661863 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.674647 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.694973 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.714929 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.734626 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.753850 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.775380 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.791000 4794 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698"
Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.795196 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.835749 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.854716 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.874481 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.895025 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.915090 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.934157 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.954761 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.974468 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 16 17:01:30 crc kubenswrapper[4794]: I0216 17:01:30.994715 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.014764 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.034325 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.054677 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.074012 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.094445 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.125378 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.134678 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.154420 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.174383 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.195032 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.214903 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.234188 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.254636 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.275362 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.294856 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.315143 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.334742 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.354334 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.374702 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.393827 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.414140 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.434856 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.473797 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2td9r\" (UniqueName: \"kubernetes.io/projected/5cff129b-bd54-4115-bc42-d5617d10eae0-kube-api-access-2td9r\") pod \"downloads-7954f5f757-fg5gp\" (UID: \"5cff129b-bd54-4115-bc42-d5617d10eae0\") " pod="openshift-console/downloads-7954f5f757-fg5gp"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.489658 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f455dc97-cc72-4981-ac4d-097fe30413d1-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hg4bz\" (UID: \"f455dc97-cc72-4981-ac4d-097fe30413d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hg4bz"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.511506 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lztsk\" (UniqueName: \"kubernetes.io/projected/a58008d3-84e1-425d-a7de-bc37a0f2664e-kube-api-access-lztsk\") pod \"dns-operator-744455d44c-dp7xf\" (UID: \"a58008d3-84e1-425d-a7de-bc37a0f2664e\") " pod="openshift-dns-operator/dns-operator-744455d44c-dp7xf"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.541207 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnzq9\" (UniqueName: \"kubernetes.io/projected/28dcafba-7fbd-4ee4-aac0-431d46f0a438-kube-api-access-nnzq9\") pod \"openshift-config-operator-7777fb866f-mkhh2\" (UID: \"28dcafba-7fbd-4ee4-aac0-431d46f0a438\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mkhh2"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.564994 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z49tl\" (UniqueName: \"kubernetes.io/projected/99adae54-586d-4a41-90e1-288285c47957-kube-api-access-z49tl\") pod \"machine-approver-56656f9798-mmhdw\" (UID: \"99adae54-586d-4a41-90e1-288285c47957\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mmhdw"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.568804 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3aee29ce-f5ae-42d5-9c0c-7648739c6c49-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dsk9b\" (UID: \"3aee29ce-f5ae-42d5-9c0c-7648739c6c49\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsk9b"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.589502 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmnsm\" (UniqueName: \"kubernetes.io/projected/112f572c-0d1d-4bb9-a66c-202a42a9aba1-kube-api-access-nmnsm\") pod \"etcd-operator-b45778765-rr75f\" (UID: \"112f572c-0d1d-4bb9-a66c-202a42a9aba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.608606 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hg4bz"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.620124 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx4ct\" (UniqueName: \"kubernetes.io/projected/9d2009dc-5385-4529-b1b3-d14a75a50089-kube-api-access-mx4ct\") pod \"apiserver-76f77b778f-5fbkt\" (UID: \"9d2009dc-5385-4529-b1b3-d14a75a50089\") " pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.629019 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prbtg\" (UniqueName: \"kubernetes.io/projected/8734f760-160a-49c7-9eb3-65e33d816f02-kube-api-access-prbtg\") pod \"openshift-controller-manager-operator-756b6f6bc6-nsztn\" (UID: \"8734f760-160a-49c7-9eb3-65e33d816f02\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsztn"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.632618 4794 request.go:700] Waited for 1.912796348s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.650062 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv4m5\" (UniqueName: \"kubernetes.io/projected/4943f829-0922-4e87-a750-1cfc2f2f1b72-kube-api-access-tv4m5\") pod \"route-controller-manager-6576b87f9c-42sb2\" (UID: \"4943f829-0922-4e87-a750-1cfc2f2f1b72\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.668602 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5f4k\" (UniqueName: \"kubernetes.io/projected/9bbc4953-c59b-46cd-8f17-513136731d2a-kube-api-access-d5f4k\") pod \"apiserver-7bbb656c7d-ld54h\" (UID: \"9bbc4953-c59b-46cd-8f17-513136731d2a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.675490 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.679635 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.712047 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.715219 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl7qt\" (UniqueName: \"kubernetes.io/projected/3aee29ce-f5ae-42d5-9c0c-7648739c6c49-kube-api-access-sl7qt\") pod \"ingress-operator-5b745b69d9-dsk9b\" (UID: \"3aee29ce-f5ae-42d5-9c0c-7648739c6c49\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsk9b"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.731166 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbs8f\" (UniqueName: \"kubernetes.io/projected/c033bce4-9921-49ec-bda6-ba7f79647c00-kube-api-access-hbs8f\") pod \"machine-api-operator-5694c8668f-2rjhr\" (UID: \"c033bce4-9921-49ec-bda6-ba7f79647c00\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2rjhr"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.735085 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.740487 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mmhdw"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.753863 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.758151 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mkhh2"
Feb 16 17:01:31 crc kubenswrapper[4794]: W0216 17:01:31.759930 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99adae54_586d_4a41_90e1_288285c47957.slice/crio-c868570cc4e0b6345ae4d5787c8f6ff44b1fc034a1e899d3934183f45a5abe20 WatchSource:0}: Error finding container c868570cc4e0b6345ae4d5787c8f6ff44b1fc034a1e899d3934183f45a5abe20: Status 404 returned error can't find the container with id c868570cc4e0b6345ae4d5787c8f6ff44b1fc034a1e899d3934183f45a5abe20
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.767546 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fg5gp"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.774070 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.786667 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsztn"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.794024 4794 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.804845 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-dp7xf"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.814403 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.849683 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hg4bz"]
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.853631 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btpks\" (UniqueName: \"kubernetes.io/projected/f3fa8c07-9947-4f5c-8295-bdec401113b0-kube-api-access-btpks\") pod \"console-f9d7485db-zwsbc\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " pod="openshift-console/console-f9d7485db-zwsbc"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.869776 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsk9b"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.871504 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n26dq\" (UniqueName: \"kubernetes.io/projected/33ee8fad-d568-45d8-b55f-3302e5f3c9c0-kube-api-access-n26dq\") pod \"router-default-5444994796-xtklb\" (UID: \"33ee8fad-d568-45d8-b55f-3302e5f3c9c0\") " pod="openshift-ingress/router-default-5444994796-xtklb"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.875113 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.892520 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xtklb"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.901517 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.913043 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79fts\" (UniqueName: \"kubernetes.io/projected/902f37b4-ec5c-40ae-b110-f17b282f7ddb-kube-api-access-79fts\") pod \"authentication-operator-69f744f599-b77qj\" (UID: \"902f37b4-ec5c-40ae-b110-f17b282f7ddb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b77qj"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.913118 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5zdw\" (UniqueName: \"kubernetes.io/projected/077b99e5-e95c-4afd-9008-d1f18a6b2f70-kube-api-access-p5zdw\") pod \"controller-manager-879f6c89f-vtrkl\" (UID: \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.913860 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.928279 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.933241 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h"]
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.933421 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.935551 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2"]
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.954380 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.966343 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-2rjhr"
Feb 16 17:01:31 crc kubenswrapper[4794]: W0216 17:01:31.967866 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bbc4953_c59b_46cd_8f17_513136731d2a.slice/crio-216798a086b0f29888e80d1b57d849ff2a58160fb4d7f418cf7795b310a6c582 WatchSource:0}: Error finding container 216798a086b0f29888e80d1b57d849ff2a58160fb4d7f418cf7795b310a6c582: Status 404 returned error can't find the container with id 216798a086b0f29888e80d1b57d849ff2a58160fb4d7f418cf7795b310a6c582
Feb 16 17:01:31 crc kubenswrapper[4794]: I0216 17:01:31.974262 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.016399 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.034286 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.046880 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zwsbc"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.050731 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shpd2\" (UniqueName: \"kubernetes.io/projected/789593ed-6d75-46b7-9c80-641a7b76a749-kube-api-access-shpd2\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.050777 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec4460e3-d99e-4ef7-9768-1d033a3e2538-trusted-ca\") pod \"console-operator-58897d9998-lgmrt\" (UID: \"ec4460e3-d99e-4ef7-9768-1d033a3e2538\") " pod="openshift-console-operator/console-operator-58897d9998-lgmrt"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.050805 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/789593ed-6d75-46b7-9c80-641a7b76a749-ca-trust-extracted\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.050827 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/789593ed-6d75-46b7-9c80-641a7b76a749-installation-pull-secrets\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.050870 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm2rk\" (UniqueName: \"kubernetes.io/projected/ec4460e3-d99e-4ef7-9768-1d033a3e2538-kube-api-access-wm2rk\") pod \"console-operator-58897d9998-lgmrt\" (UID: \"ec4460e3-d99e-4ef7-9768-1d033a3e2538\") " pod="openshift-console-operator/console-operator-58897d9998-lgmrt"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.050896 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26zcr\" (UniqueName: \"kubernetes.io/projected/48aebd18-410a-4f26-8405-e618d55f7881-kube-api-access-26zcr\") pod \"cluster-samples-operator-665b6dd947-n5rq7\" (UID: \"48aebd18-410a-4f26-8405-e618d55f7881\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5rq7"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.050918 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/789593ed-6d75-46b7-9c80-641a7b76a749-registry-tls\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.050944 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec4460e3-d99e-4ef7-9768-1d033a3e2538-serving-cert\") pod \"console-operator-58897d9998-lgmrt\" (UID: \"ec4460e3-d99e-4ef7-9768-1d033a3e2538\") " pod="openshift-console-operator/console-operator-58897d9998-lgmrt"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.050969 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec4460e3-d99e-4ef7-9768-1d033a3e2538-config\") pod \"console-operator-58897d9998-lgmrt\" (UID: \"ec4460e3-d99e-4ef7-9768-1d033a3e2538\") " pod="openshift-console-operator/console-operator-58897d9998-lgmrt"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.051005 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/789593ed-6d75-46b7-9c80-641a7b76a749-registry-certificates\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.051028 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfdff039-c0a8-4244-9f4e-7aeb01507348-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vhvsb\" (UID: \"cfdff039-c0a8-4244-9f4e-7aeb01507348\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vhvsb"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.051048 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfdff039-c0a8-4244-9f4e-7aeb01507348-config\") pod \"kube-apiserver-operator-766d6c64bb-vhvsb\" (UID: \"cfdff039-c0a8-4244-9f4e-7aeb01507348\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vhvsb"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.051068 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/48aebd18-410a-4f26-8405-e618d55f7881-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-n5rq7\" (UID: \"48aebd18-410a-4f26-8405-e618d55f7881\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5rq7"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.051093 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/789593ed-6d75-46b7-9c80-641a7b76a749-bound-sa-token\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.051122 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.051148 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/789593ed-6d75-46b7-9c80-641a7b76a749-trusted-ca\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.051211 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cfdff039-c0a8-4244-9f4e-7aeb01507348-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vhvsb\" (UID: \"cfdff039-c0a8-4244-9f4e-7aeb01507348\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vhvsb"
Feb 16 17:01:32 crc kubenswrapper[4794]: E0216 17:01:32.051667 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:32.551653776 +0000 UTC m=+118.499748423 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.153520 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.153991 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/39a20b8b-461a-4584-9555-03b93bc951d6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5dxq9\" (UID: \"39a20b8b-461a-4584-9555-03b93bc951d6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5dxq9"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154035 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/08547bee-d06e-467b-8be7-db65e24c7e49-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s8r99\" (UID: \"08547bee-d06e-467b-8be7-db65e24c7e49\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s8r99"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154053 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqtv9\" (UniqueName: \"kubernetes.io/projected/54320564-7237-45c8-b465-82f3546faf41-kube-api-access-bqtv9\") pod \"multus-admission-controller-857f4d67dd-6x29q\" (UID: \"54320564-7237-45c8-b465-82f3546faf41\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6x29q"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154078 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cfdff039-c0a8-4244-9f4e-7aeb01507348-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vhvsb\" (UID: \"cfdff039-c0a8-4244-9f4e-7aeb01507348\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vhvsb"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154097 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f63e5c5e-d547-4849-afe6-932beaf632a5-images\") pod \"machine-config-operator-74547568cd-hr6kg\" (UID: \"f63e5c5e-d547-4849-afe6-932beaf632a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hr6kg"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154111 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e751341c-bffc-4204-b03c-5352f25323a0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6ghx\" (UID: \"e751341c-bffc-4204-b03c-5352f25323a0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ghx"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154180 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqwq8\" (UniqueName: \"kubernetes.io/projected/08547bee-d06e-467b-8be7-db65e24c7e49-kube-api-access-bqwq8\") pod \"control-plane-machine-set-operator-78cbb6b69f-s8r99\" (UID: \"08547bee-d06e-467b-8be7-db65e24c7e49\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s8r99"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154215 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7559b705-987c-4020-afac-604cf0e58bbf-node-bootstrap-token\") pod \"machine-config-server-hmkvr\" (UID: \"7559b705-987c-4020-afac-604cf0e58bbf\") " pod="openshift-machine-config-operator/machine-config-server-hmkvr"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154233 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/abf42440-612e-4e7f-95ee-5a4860c9bc59-metrics-tls\") pod \"dns-default-9btqh\" (UID: \"abf42440-612e-4e7f-95ee-5a4860c9bc59\") " pod="openshift-dns/dns-default-9btqh"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154257 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a53e5ad-d4b9-4d98-b3c1-b3e59abf44e3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lzvbh\" (UID: \"3a53e5ad-d4b9-4d98-b3c1-b3e59abf44e3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lzvbh"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154272 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154297 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/789593ed-6d75-46b7-9c80-641a7b76a749-ca-trust-extracted\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154331 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f63e5c5e-d547-4849-afe6-932beaf632a5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hr6kg\" (UID: \"f63e5c5e-d547-4849-afe6-932beaf632a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hr6kg"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154348 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9c029145-bf5d-4a8c-9419-fdcf93c96a4d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-85b84\" (UID: \"9c029145-bf5d-4a8c-9419-fdcf93c96a4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-85b84"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154364 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8d72b4da-984e-4798-bfaa-d7a9c4b1c587-signing-key\") pod \"service-ca-9c57cc56f-vwnxb\" (UID: \"8d72b4da-984e-4798-bfaa-d7a9c4b1c587\") " pod="openshift-service-ca/service-ca-9c57cc56f-vwnxb"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154382 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/54320564-7237-45c8-b465-82f3546faf41-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6x29q\" (UID: \"54320564-7237-45c8-b465-82f3546faf41\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6x29q"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154421 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33b7f2f3-0621-4fb2-b8f4-d89dd18bf7f0-config\") pod \"service-ca-operator-777779d784-b6clj\" (UID: \"33b7f2f3-0621-4fb2-b8f4-d89dd18bf7f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b6clj"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154453 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm2rk\" (UniqueName: \"kubernetes.io/projected/ec4460e3-d99e-4ef7-9768-1d033a3e2538-kube-api-access-wm2rk\") pod \"console-operator-58897d9998-lgmrt\" (UID: \"ec4460e3-d99e-4ef7-9768-1d033a3e2538\") " pod="openshift-console-operator/console-operator-58897d9998-lgmrt"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154471 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h582\" (UniqueName: \"kubernetes.io/projected/ef8bb78b-0644-4319-8928-4ba08d325777-kube-api-access-7h582\") pod \"package-server-manager-789f6589d5-jv2jb\" (UID: \"ef8bb78b-0644-4319-8928-4ba08d325777\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jv2jb"
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154497 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26zcr\"
(UniqueName: \"kubernetes.io/projected/48aebd18-410a-4f26-8405-e618d55f7881-kube-api-access-26zcr\") pod \"cluster-samples-operator-665b6dd947-n5rq7\" (UID: \"48aebd18-410a-4f26-8405-e618d55f7881\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5rq7" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154521 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/789593ed-6d75-46b7-9c80-641a7b76a749-registry-tls\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154536 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154589 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154605 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: 
\"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154629 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5kdd\" (UniqueName: \"kubernetes.io/projected/abf42440-612e-4e7f-95ee-5a4860c9bc59-kube-api-access-b5kdd\") pod \"dns-default-9btqh\" (UID: \"abf42440-612e-4e7f-95ee-5a4860c9bc59\") " pod="openshift-dns/dns-default-9btqh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154667 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cb3afbce-0480-4641-95db-17a3c9c28d2d-plugins-dir\") pod \"csi-hostpathplugin-znzx2\" (UID: \"cb3afbce-0480-4641-95db-17a3c9c28d2d\") " pod="hostpath-provisioner/csi-hostpathplugin-znzx2" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154681 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abf42440-612e-4e7f-95ee-5a4860c9bc59-config-volume\") pod \"dns-default-9btqh\" (UID: \"abf42440-612e-4e7f-95ee-5a4860c9bc59\") " pod="openshift-dns/dns-default-9btqh" Feb 16 17:01:32 crc kubenswrapper[4794]: E0216 17:01:32.154746 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:32.654717576 +0000 UTC m=+118.602812283 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154820 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec4460e3-d99e-4ef7-9768-1d033a3e2538-config\") pod \"console-operator-58897d9998-lgmrt\" (UID: \"ec4460e3-d99e-4ef7-9768-1d033a3e2538\") " pod="openshift-console-operator/console-operator-58897d9998-lgmrt" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154871 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39a20b8b-461a-4584-9555-03b93bc951d6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5dxq9\" (UID: \"39a20b8b-461a-4584-9555-03b93bc951d6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5dxq9" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154896 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b2c5bb52-5539-44fa-ae62-89450f1a97f2-proxy-tls\") pod \"machine-config-controller-84d6567774-swnwm\" (UID: \"b2c5bb52-5539-44fa-ae62-89450f1a97f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swnwm" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154922 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/988b4b22-136d-441f-a51f-8209b7181c08-apiservice-cert\") pod \"packageserver-d55dfcdfc-7ngdp\" (UID: \"988b4b22-136d-441f-a51f-8209b7181c08\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154947 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/988b4b22-136d-441f-a51f-8209b7181c08-webhook-cert\") pod \"packageserver-d55dfcdfc-7ngdp\" (UID: \"988b4b22-136d-441f-a51f-8209b7181c08\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154974 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.154998 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k59r5\" (UniqueName: \"kubernetes.io/projected/ba10edb3-730f-4c8b-8380-54162faf0ba8-kube-api-access-k59r5\") pod \"olm-operator-6b444d44fb-5czgb\" (UID: \"ba10edb3-730f-4c8b-8380-54162faf0ba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5czgb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155027 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155049 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/789593ed-6d75-46b7-9c80-641a7b76a749-trusted-ca\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155070 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77e8b691-0679-4dd2-996e-10ee488c5594-cert\") pod \"ingress-canary-4vwx5\" (UID: \"77e8b691-0679-4dd2-996e-10ee488c5594\") " pod="openshift-ingress-canary/ingress-canary-4vwx5" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155090 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7559b705-987c-4020-afac-604cf0e58bbf-certs\") pod \"machine-config-server-hmkvr\" (UID: \"7559b705-987c-4020-afac-604cf0e58bbf\") " pod="openshift-machine-config-operator/machine-config-server-hmkvr" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155112 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqftx\" (UniqueName: \"kubernetes.io/projected/7559b705-987c-4020-afac-604cf0e58bbf-kube-api-access-vqftx\") pod \"machine-config-server-hmkvr\" (UID: \"7559b705-987c-4020-afac-604cf0e58bbf\") " pod="openshift-machine-config-operator/machine-config-server-hmkvr" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155132 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ced56472-de39-44c3-af64-45e0c6dbe0c6-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-h26hh\" (UID: \"ced56472-de39-44c3-af64-45e0c6dbe0c6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h26hh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155199 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ef8bb78b-0644-4319-8928-4ba08d325777-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jv2jb\" (UID: \"ef8bb78b-0644-4319-8928-4ba08d325777\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jv2jb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155243 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md746\" (UniqueName: \"kubernetes.io/projected/6f58f777-b916-4180-9e54-f138e10b2297-kube-api-access-md746\") pod \"migrator-59844c95c7-6z49s\" (UID: \"6f58f777-b916-4180-9e54-f138e10b2297\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6z49s" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155264 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cb3afbce-0480-4641-95db-17a3c9c28d2d-socket-dir\") pod \"csi-hostpathplugin-znzx2\" (UID: \"cb3afbce-0480-4641-95db-17a3c9c28d2d\") " pod="hostpath-provisioner/csi-hostpathplugin-znzx2" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155287 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8d72b4da-984e-4798-bfaa-d7a9c4b1c587-signing-cabundle\") pod \"service-ca-9c57cc56f-vwnxb\" (UID: \"8d72b4da-984e-4798-bfaa-d7a9c4b1c587\") " pod="openshift-service-ca/service-ca-9c57cc56f-vwnxb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 
17:01:32.155334 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggdpv\" (UniqueName: \"kubernetes.io/projected/ced56472-de39-44c3-af64-45e0c6dbe0c6-kube-api-access-ggdpv\") pod \"openshift-apiserver-operator-796bbdcf4f-h26hh\" (UID: \"ced56472-de39-44c3-af64-45e0c6dbe0c6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h26hh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155355 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2b7e6568-6f15-4a8f-aca6-38be84a1a624-audit-dir\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155420 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a6aade7-1b78-4753-a22d-7251a1b27c9e-secret-volume\") pod \"collect-profiles-29521020-v7cql\" (UID: \"1a6aade7-1b78-4753-a22d-7251a1b27c9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155470 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155492 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-audit-policies\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155545 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33b7f2f3-0621-4fb2-b8f4-d89dd18bf7f0-serving-cert\") pod \"service-ca-operator-777779d784-b6clj\" (UID: \"33b7f2f3-0621-4fb2-b8f4-d89dd18bf7f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b6clj" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155566 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk54j\" (UniqueName: \"kubernetes.io/projected/1c46f3c7-53f2-456a-80d2-0007d79b7980-kube-api-access-nk54j\") pod \"catalog-operator-68c6474976-g2zhh\" (UID: \"1c46f3c7-53f2-456a-80d2-0007d79b7980\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g2zhh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155592 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jbgr\" (UniqueName: \"kubernetes.io/projected/9c029145-bf5d-4a8c-9419-fdcf93c96a4d-kube-api-access-9jbgr\") pod \"marketplace-operator-79b997595-85b84\" (UID: \"9c029145-bf5d-4a8c-9419-fdcf93c96a4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-85b84" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155631 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6p66\" (UniqueName: \"kubernetes.io/projected/2b7e6568-6f15-4a8f-aca6-38be84a1a624-kube-api-access-d6p66\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155663 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/789593ed-6d75-46b7-9c80-641a7b76a749-ca-trust-extracted\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155673 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shpd2\" (UniqueName: \"kubernetes.io/projected/789593ed-6d75-46b7-9c80-641a7b76a749-kube-api-access-shpd2\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155726 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e751341c-bffc-4204-b03c-5352f25323a0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6ghx\" (UID: \"e751341c-bffc-4204-b03c-5352f25323a0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ghx" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155750 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec4460e3-d99e-4ef7-9768-1d033a3e2538-trusted-ca\") pod \"console-operator-58897d9998-lgmrt\" (UID: \"ec4460e3-d99e-4ef7-9768-1d033a3e2538\") " pod="openshift-console-operator/console-operator-58897d9998-lgmrt" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155772 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b7zz\" (UniqueName: 
\"kubernetes.io/projected/cb3afbce-0480-4641-95db-17a3c9c28d2d-kube-api-access-5b7zz\") pod \"csi-hostpathplugin-znzx2\" (UID: \"cb3afbce-0480-4641-95db-17a3c9c28d2d\") " pod="hostpath-provisioner/csi-hostpathplugin-znzx2" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155788 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a6aade7-1b78-4753-a22d-7251a1b27c9e-config-volume\") pod \"collect-profiles-29521020-v7cql\" (UID: \"1a6aade7-1b78-4753-a22d-7251a1b27c9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155807 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvpsg\" (UniqueName: \"kubernetes.io/projected/3a53e5ad-d4b9-4d98-b3c1-b3e59abf44e3-kube-api-access-fvpsg\") pod \"kube-storage-version-migrator-operator-b67b599dd-lzvbh\" (UID: \"3a53e5ad-d4b9-4d98-b3c1-b3e59abf44e3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lzvbh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155824 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ba10edb3-730f-4c8b-8380-54162faf0ba8-srv-cert\") pod \"olm-operator-6b444d44fb-5czgb\" (UID: \"ba10edb3-730f-4c8b-8380-54162faf0ba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5czgb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155849 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t74qc\" (UniqueName: \"kubernetes.io/projected/77e8b691-0679-4dd2-996e-10ee488c5594-kube-api-access-t74qc\") pod \"ingress-canary-4vwx5\" (UID: \"77e8b691-0679-4dd2-996e-10ee488c5594\") " 
pod="openshift-ingress-canary/ingress-canary-4vwx5" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155866 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g27gf\" (UniqueName: \"kubernetes.io/projected/f63e5c5e-d547-4849-afe6-932beaf632a5-kube-api-access-g27gf\") pod \"machine-config-operator-74547568cd-hr6kg\" (UID: \"f63e5c5e-d547-4849-afe6-932beaf632a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hr6kg" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155881 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66l49\" (UniqueName: \"kubernetes.io/projected/988b4b22-136d-441f-a51f-8209b7181c08-kube-api-access-66l49\") pod \"packageserver-d55dfcdfc-7ngdp\" (UID: \"988b4b22-136d-441f-a51f-8209b7181c08\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155897 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e751341c-bffc-4204-b03c-5352f25323a0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6ghx\" (UID: \"e751341c-bffc-4204-b03c-5352f25323a0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ghx" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155937 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/789593ed-6d75-46b7-9c80-641a7b76a749-installation-pull-secrets\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155951 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/988b4b22-136d-441f-a51f-8209b7181c08-tmpfs\") pod \"packageserver-d55dfcdfc-7ngdp\" (UID: \"988b4b22-136d-441f-a51f-8209b7181c08\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.155986 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ba10edb3-730f-4c8b-8380-54162faf0ba8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5czgb\" (UID: \"ba10edb3-730f-4c8b-8380-54162faf0ba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5czgb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.156001 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.156019 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.156061 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cb3afbce-0480-4641-95db-17a3c9c28d2d-csi-data-dir\") pod 
\"csi-hostpathplugin-znzx2\" (UID: \"cb3afbce-0480-4641-95db-17a3c9c28d2d\") " pod="hostpath-provisioner/csi-hostpathplugin-znzx2" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.156076 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cctv\" (UniqueName: \"kubernetes.io/projected/1a6aade7-1b78-4753-a22d-7251a1b27c9e-kube-api-access-5cctv\") pod \"collect-profiles-29521020-v7cql\" (UID: \"1a6aade7-1b78-4753-a22d-7251a1b27c9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.156103 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d74m\" (UniqueName: \"kubernetes.io/projected/b2c5bb52-5539-44fa-ae62-89450f1a97f2-kube-api-access-9d74m\") pod \"machine-config-controller-84d6567774-swnwm\" (UID: \"b2c5bb52-5539-44fa-ae62-89450f1a97f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swnwm" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.156139 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f63e5c5e-d547-4849-afe6-932beaf632a5-proxy-tls\") pod \"machine-config-operator-74547568cd-hr6kg\" (UID: \"f63e5c5e-d547-4849-afe6-932beaf632a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hr6kg" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.157547 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f582s\" (UniqueName: \"kubernetes.io/projected/8d72b4da-984e-4798-bfaa-d7a9c4b1c587-kube-api-access-f582s\") pod \"service-ca-9c57cc56f-vwnxb\" (UID: \"8d72b4da-984e-4798-bfaa-d7a9c4b1c587\") " pod="openshift-service-ca/service-ca-9c57cc56f-vwnxb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.157590 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec4460e3-d99e-4ef7-9768-1d033a3e2538-serving-cert\") pod \"console-operator-58897d9998-lgmrt\" (UID: \"ec4460e3-d99e-4ef7-9768-1d033a3e2538\") " pod="openshift-console-operator/console-operator-58897d9998-lgmrt" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.157620 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1c46f3c7-53f2-456a-80d2-0007d79b7980-srv-cert\") pod \"catalog-operator-68c6474976-g2zhh\" (UID: \"1c46f3c7-53f2-456a-80d2-0007d79b7980\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g2zhh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.157638 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.157684 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cb3afbce-0480-4641-95db-17a3c9c28d2d-registration-dir\") pod \"csi-hostpathplugin-znzx2\" (UID: \"cb3afbce-0480-4641-95db-17a3c9c28d2d\") " pod="hostpath-provisioner/csi-hostpathplugin-znzx2" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.157699 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: 
\"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.157736 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f2nw\" (UniqueName: \"kubernetes.io/projected/33b7f2f3-0621-4fb2-b8f4-d89dd18bf7f0-kube-api-access-5f2nw\") pod \"service-ca-operator-777779d784-b6clj\" (UID: \"33b7f2f3-0621-4fb2-b8f4-d89dd18bf7f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b6clj" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.157759 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c029145-bf5d-4a8c-9419-fdcf93c96a4d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-85b84\" (UID: \"9c029145-bf5d-4a8c-9419-fdcf93c96a4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-85b84" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.157774 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64898\" (UniqueName: \"kubernetes.io/projected/39a20b8b-461a-4584-9555-03b93bc951d6-kube-api-access-64898\") pod \"cluster-image-registry-operator-dc59b4c8b-5dxq9\" (UID: \"39a20b8b-461a-4584-9555-03b93bc951d6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5dxq9" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.157810 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/789593ed-6d75-46b7-9c80-641a7b76a749-registry-certificates\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.157827 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfdff039-c0a8-4244-9f4e-7aeb01507348-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vhvsb\" (UID: \"cfdff039-c0a8-4244-9f4e-7aeb01507348\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vhvsb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.157843 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfdff039-c0a8-4244-9f4e-7aeb01507348-config\") pod \"kube-apiserver-operator-766d6c64bb-vhvsb\" (UID: \"cfdff039-c0a8-4244-9f4e-7aeb01507348\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vhvsb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.157869 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/48aebd18-410a-4f26-8405-e618d55f7881-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-n5rq7\" (UID: \"48aebd18-410a-4f26-8405-e618d55f7881\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5rq7" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.157894 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/789593ed-6d75-46b7-9c80-641a7b76a749-bound-sa-token\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.157911 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b2c5bb52-5539-44fa-ae62-89450f1a97f2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-swnwm\" (UID: 
\"b2c5bb52-5539-44fa-ae62-89450f1a97f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swnwm" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.157955 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1c46f3c7-53f2-456a-80d2-0007d79b7980-profile-collector-cert\") pod \"catalog-operator-68c6474976-g2zhh\" (UID: \"1c46f3c7-53f2-456a-80d2-0007d79b7980\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g2zhh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.157971 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cb3afbce-0480-4641-95db-17a3c9c28d2d-mountpoint-dir\") pod \"csi-hostpathplugin-znzx2\" (UID: \"cb3afbce-0480-4641-95db-17a3c9c28d2d\") " pod="hostpath-provisioner/csi-hostpathplugin-znzx2" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.157986 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.158003 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a53e5ad-d4b9-4d98-b3c1-b3e59abf44e3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lzvbh\" (UID: \"3a53e5ad-d4b9-4d98-b3c1-b3e59abf44e3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lzvbh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 
17:01:32.158018 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ced56472-de39-44c3-af64-45e0c6dbe0c6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-h26hh\" (UID: \"ced56472-de39-44c3-af64-45e0c6dbe0c6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h26hh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.158036 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39a20b8b-461a-4584-9555-03b93bc951d6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5dxq9\" (UID: \"39a20b8b-461a-4584-9555-03b93bc951d6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5dxq9" Feb 16 17:01:32 crc kubenswrapper[4794]: E0216 17:01:32.159224 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:32.659212885 +0000 UTC m=+118.607307532 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.160578 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/789593ed-6d75-46b7-9c80-641a7b76a749-trusted-ca\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.163881 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/789593ed-6d75-46b7-9c80-641a7b76a749-registry-tls\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.165814 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/789593ed-6d75-46b7-9c80-641a7b76a749-registry-certificates\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.167055 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfdff039-c0a8-4244-9f4e-7aeb01507348-config\") pod \"kube-apiserver-operator-766d6c64bb-vhvsb\" (UID: \"cfdff039-c0a8-4244-9f4e-7aeb01507348\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vhvsb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.167181 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec4460e3-d99e-4ef7-9768-1d033a3e2538-config\") pod \"console-operator-58897d9998-lgmrt\" (UID: \"ec4460e3-d99e-4ef7-9768-1d033a3e2538\") " pod="openshift-console-operator/console-operator-58897d9998-lgmrt" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.170582 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec4460e3-d99e-4ef7-9768-1d033a3e2538-trusted-ca\") pod \"console-operator-58897d9998-lgmrt\" (UID: \"ec4460e3-d99e-4ef7-9768-1d033a3e2538\") " pod="openshift-console-operator/console-operator-58897d9998-lgmrt" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.171867 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/789593ed-6d75-46b7-9c80-641a7b76a749-installation-pull-secrets\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.172731 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec4460e3-d99e-4ef7-9768-1d033a3e2538-serving-cert\") pod \"console-operator-58897d9998-lgmrt\" (UID: \"ec4460e3-d99e-4ef7-9768-1d033a3e2538\") " pod="openshift-console-operator/console-operator-58897d9998-lgmrt" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.173586 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/48aebd18-410a-4f26-8405-e618d55f7881-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-n5rq7\" (UID: \"48aebd18-410a-4f26-8405-e618d55f7881\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5rq7" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.179606 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfdff039-c0a8-4244-9f4e-7aeb01507348-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vhvsb\" (UID: \"cfdff039-c0a8-4244-9f4e-7aeb01507348\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vhvsb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.184353 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b77qj" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.193182 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shpd2\" (UniqueName: \"kubernetes.io/projected/789593ed-6d75-46b7-9c80-641a7b76a749-kube-api-access-shpd2\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.219750 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm2rk\" (UniqueName: \"kubernetes.io/projected/ec4460e3-d99e-4ef7-9768-1d033a3e2538-kube-api-access-wm2rk\") pod \"console-operator-58897d9998-lgmrt\" (UID: \"ec4460e3-d99e-4ef7-9768-1d033a3e2538\") " pod="openshift-console-operator/console-operator-58897d9998-lgmrt" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.234959 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26zcr\" (UniqueName: \"kubernetes.io/projected/48aebd18-410a-4f26-8405-e618d55f7881-kube-api-access-26zcr\") pod \"cluster-samples-operator-665b6dd947-n5rq7\" (UID: 
\"48aebd18-410a-4f26-8405-e618d55f7881\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5rq7" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.250491 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fg5gp"] Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.255773 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-mkhh2"] Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.261764 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.262024 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/08547bee-d06e-467b-8be7-db65e24c7e49-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s8r99\" (UID: \"08547bee-d06e-467b-8be7-db65e24c7e49\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s8r99" Feb 16 17:01:32 crc kubenswrapper[4794]: E0216 17:01:32.262451 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:32.762422368 +0000 UTC m=+118.710517015 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.264120 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/39a20b8b-461a-4584-9555-03b93bc951d6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5dxq9\" (UID: \"39a20b8b-461a-4584-9555-03b93bc951d6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5dxq9" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.264206 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f63e5c5e-d547-4849-afe6-932beaf632a5-images\") pod \"machine-config-operator-74547568cd-hr6kg\" (UID: \"f63e5c5e-d547-4849-afe6-932beaf632a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hr6kg" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.264260 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqtv9\" (UniqueName: \"kubernetes.io/projected/54320564-7237-45c8-b465-82f3546faf41-kube-api-access-bqtv9\") pod \"multus-admission-controller-857f4d67dd-6x29q\" (UID: \"54320564-7237-45c8-b465-82f3546faf41\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6x29q" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.264342 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e751341c-bffc-4204-b03c-5352f25323a0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6ghx\" (UID: \"e751341c-bffc-4204-b03c-5352f25323a0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ghx" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.264378 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqwq8\" (UniqueName: \"kubernetes.io/projected/08547bee-d06e-467b-8be7-db65e24c7e49-kube-api-access-bqwq8\") pod \"control-plane-machine-set-operator-78cbb6b69f-s8r99\" (UID: \"08547bee-d06e-467b-8be7-db65e24c7e49\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s8r99" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.264408 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7559b705-987c-4020-afac-604cf0e58bbf-node-bootstrap-token\") pod \"machine-config-server-hmkvr\" (UID: \"7559b705-987c-4020-afac-604cf0e58bbf\") " pod="openshift-machine-config-operator/machine-config-server-hmkvr" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.264434 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/abf42440-612e-4e7f-95ee-5a4860c9bc59-metrics-tls\") pod \"dns-default-9btqh\" (UID: \"abf42440-612e-4e7f-95ee-5a4860c9bc59\") " pod="openshift-dns/dns-default-9btqh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.264467 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc 
kubenswrapper[4794]: I0216 17:01:32.264509 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a53e5ad-d4b9-4d98-b3c1-b3e59abf44e3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lzvbh\" (UID: \"3a53e5ad-d4b9-4d98-b3c1-b3e59abf44e3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lzvbh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.264536 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9c029145-bf5d-4a8c-9419-fdcf93c96a4d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-85b84\" (UID: \"9c029145-bf5d-4a8c-9419-fdcf93c96a4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-85b84" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.264563 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8d72b4da-984e-4798-bfaa-d7a9c4b1c587-signing-key\") pod \"service-ca-9c57cc56f-vwnxb\" (UID: \"8d72b4da-984e-4798-bfaa-d7a9c4b1c587\") " pod="openshift-service-ca/service-ca-9c57cc56f-vwnxb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.264584 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/54320564-7237-45c8-b465-82f3546faf41-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6x29q\" (UID: \"54320564-7237-45c8-b465-82f3546faf41\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6x29q" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.264612 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f63e5c5e-d547-4849-afe6-932beaf632a5-auth-proxy-config\") pod 
\"machine-config-operator-74547568cd-hr6kg\" (UID: \"f63e5c5e-d547-4849-afe6-932beaf632a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hr6kg" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.264650 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33b7f2f3-0621-4fb2-b8f4-d89dd18bf7f0-config\") pod \"service-ca-operator-777779d784-b6clj\" (UID: \"33b7f2f3-0621-4fb2-b8f4-d89dd18bf7f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b6clj" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.264681 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h582\" (UniqueName: \"kubernetes.io/projected/ef8bb78b-0644-4319-8928-4ba08d325777-kube-api-access-7h582\") pod \"package-server-manager-789f6589d5-jv2jb\" (UID: \"ef8bb78b-0644-4319-8928-4ba08d325777\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jv2jb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.264742 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.264772 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.264804 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.264906 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cb3afbce-0480-4641-95db-17a3c9c28d2d-plugins-dir\") pod \"csi-hostpathplugin-znzx2\" (UID: \"cb3afbce-0480-4641-95db-17a3c9c28d2d\") " pod="hostpath-provisioner/csi-hostpathplugin-znzx2" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.264939 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abf42440-612e-4e7f-95ee-5a4860c9bc59-config-volume\") pod \"dns-default-9btqh\" (UID: \"abf42440-612e-4e7f-95ee-5a4860c9bc59\") " pod="openshift-dns/dns-default-9btqh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.264965 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5kdd\" (UniqueName: \"kubernetes.io/projected/abf42440-612e-4e7f-95ee-5a4860c9bc59-kube-api-access-b5kdd\") pod \"dns-default-9btqh\" (UID: \"abf42440-612e-4e7f-95ee-5a4860c9bc59\") " pod="openshift-dns/dns-default-9btqh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.264998 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b2c5bb52-5539-44fa-ae62-89450f1a97f2-proxy-tls\") pod \"machine-config-controller-84d6567774-swnwm\" (UID: \"b2c5bb52-5539-44fa-ae62-89450f1a97f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swnwm" Feb 16 17:01:32 crc 
kubenswrapper[4794]: I0216 17:01:32.265172 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39a20b8b-461a-4584-9555-03b93bc951d6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5dxq9\" (UID: \"39a20b8b-461a-4584-9555-03b93bc951d6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5dxq9" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265232 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/988b4b22-136d-441f-a51f-8209b7181c08-apiservice-cert\") pod \"packageserver-d55dfcdfc-7ngdp\" (UID: \"988b4b22-136d-441f-a51f-8209b7181c08\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265259 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/988b4b22-136d-441f-a51f-8209b7181c08-webhook-cert\") pod \"packageserver-d55dfcdfc-7ngdp\" (UID: \"988b4b22-136d-441f-a51f-8209b7181c08\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265336 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7559b705-987c-4020-afac-604cf0e58bbf-certs\") pod \"machine-config-server-hmkvr\" (UID: \"7559b705-987c-4020-afac-604cf0e58bbf\") " pod="openshift-machine-config-operator/machine-config-server-hmkvr" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265366 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqftx\" (UniqueName: \"kubernetes.io/projected/7559b705-987c-4020-afac-604cf0e58bbf-kube-api-access-vqftx\") pod \"machine-config-server-hmkvr\" (UID: \"7559b705-987c-4020-afac-604cf0e58bbf\") " 
pod="openshift-machine-config-operator/machine-config-server-hmkvr" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265397 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ced56472-de39-44c3-af64-45e0c6dbe0c6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-h26hh\" (UID: \"ced56472-de39-44c3-af64-45e0c6dbe0c6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h26hh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265428 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265458 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k59r5\" (UniqueName: \"kubernetes.io/projected/ba10edb3-730f-4c8b-8380-54162faf0ba8-kube-api-access-k59r5\") pod \"olm-operator-6b444d44fb-5czgb\" (UID: \"ba10edb3-730f-4c8b-8380-54162faf0ba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5czgb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265495 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265524 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/77e8b691-0679-4dd2-996e-10ee488c5594-cert\") pod \"ingress-canary-4vwx5\" (UID: \"77e8b691-0679-4dd2-996e-10ee488c5594\") " pod="openshift-ingress-canary/ingress-canary-4vwx5" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265548 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ef8bb78b-0644-4319-8928-4ba08d325777-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jv2jb\" (UID: \"ef8bb78b-0644-4319-8928-4ba08d325777\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jv2jb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265582 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8d72b4da-984e-4798-bfaa-d7a9c4b1c587-signing-cabundle\") pod \"service-ca-9c57cc56f-vwnxb\" (UID: \"8d72b4da-984e-4798-bfaa-d7a9c4b1c587\") " pod="openshift-service-ca/service-ca-9c57cc56f-vwnxb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265609 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggdpv\" (UniqueName: \"kubernetes.io/projected/ced56472-de39-44c3-af64-45e0c6dbe0c6-kube-api-access-ggdpv\") pod \"openshift-apiserver-operator-796bbdcf4f-h26hh\" (UID: \"ced56472-de39-44c3-af64-45e0c6dbe0c6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h26hh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265645 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md746\" (UniqueName: \"kubernetes.io/projected/6f58f777-b916-4180-9e54-f138e10b2297-kube-api-access-md746\") pod \"migrator-59844c95c7-6z49s\" (UID: \"6f58f777-b916-4180-9e54-f138e10b2297\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6z49s" Feb 16 17:01:32 crc kubenswrapper[4794]: 
I0216 17:01:32.265667 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cb3afbce-0480-4641-95db-17a3c9c28d2d-socket-dir\") pod \"csi-hostpathplugin-znzx2\" (UID: \"cb3afbce-0480-4641-95db-17a3c9c28d2d\") " pod="hostpath-provisioner/csi-hostpathplugin-znzx2" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265697 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2b7e6568-6f15-4a8f-aca6-38be84a1a624-audit-dir\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265741 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a6aade7-1b78-4753-a22d-7251a1b27c9e-secret-volume\") pod \"collect-profiles-29521020-v7cql\" (UID: \"1a6aade7-1b78-4753-a22d-7251a1b27c9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265757 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f63e5c5e-d547-4849-afe6-932beaf632a5-images\") pod \"machine-config-operator-74547568cd-hr6kg\" (UID: \"f63e5c5e-d547-4849-afe6-932beaf632a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hr6kg" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265781 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265812 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-audit-policies\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265838 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33b7f2f3-0621-4fb2-b8f4-d89dd18bf7f0-serving-cert\") pod \"service-ca-operator-777779d784-b6clj\" (UID: \"33b7f2f3-0621-4fb2-b8f4-d89dd18bf7f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b6clj" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265867 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk54j\" (UniqueName: \"kubernetes.io/projected/1c46f3c7-53f2-456a-80d2-0007d79b7980-kube-api-access-nk54j\") pod \"catalog-operator-68c6474976-g2zhh\" (UID: \"1c46f3c7-53f2-456a-80d2-0007d79b7980\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g2zhh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265897 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jbgr\" (UniqueName: \"kubernetes.io/projected/9c029145-bf5d-4a8c-9419-fdcf93c96a4d-kube-api-access-9jbgr\") pod \"marketplace-operator-79b997595-85b84\" (UID: \"9c029145-bf5d-4a8c-9419-fdcf93c96a4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-85b84" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265928 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6p66\" (UniqueName: 
\"kubernetes.io/projected/2b7e6568-6f15-4a8f-aca6-38be84a1a624-kube-api-access-d6p66\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265952 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e751341c-bffc-4204-b03c-5352f25323a0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6ghx\" (UID: \"e751341c-bffc-4204-b03c-5352f25323a0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ghx" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265962 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e751341c-bffc-4204-b03c-5352f25323a0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6ghx\" (UID: \"e751341c-bffc-4204-b03c-5352f25323a0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ghx" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.265988 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a6aade7-1b78-4753-a22d-7251a1b27c9e-config-volume\") pod \"collect-profiles-29521020-v7cql\" (UID: \"1a6aade7-1b78-4753-a22d-7251a1b27c9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.266024 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b7zz\" (UniqueName: \"kubernetes.io/projected/cb3afbce-0480-4641-95db-17a3c9c28d2d-kube-api-access-5b7zz\") pod \"csi-hostpathplugin-znzx2\" (UID: \"cb3afbce-0480-4641-95db-17a3c9c28d2d\") " pod="hostpath-provisioner/csi-hostpathplugin-znzx2" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 
17:01:32.266056 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvpsg\" (UniqueName: \"kubernetes.io/projected/3a53e5ad-d4b9-4d98-b3c1-b3e59abf44e3-kube-api-access-fvpsg\") pod \"kube-storage-version-migrator-operator-b67b599dd-lzvbh\" (UID: \"3a53e5ad-d4b9-4d98-b3c1-b3e59abf44e3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lzvbh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.266087 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ba10edb3-730f-4c8b-8380-54162faf0ba8-srv-cert\") pod \"olm-operator-6b444d44fb-5czgb\" (UID: \"ba10edb3-730f-4c8b-8380-54162faf0ba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5czgb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.266115 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t74qc\" (UniqueName: \"kubernetes.io/projected/77e8b691-0679-4dd2-996e-10ee488c5594-kube-api-access-t74qc\") pod \"ingress-canary-4vwx5\" (UID: \"77e8b691-0679-4dd2-996e-10ee488c5594\") " pod="openshift-ingress-canary/ingress-canary-4vwx5" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.266142 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g27gf\" (UniqueName: \"kubernetes.io/projected/f63e5c5e-d547-4849-afe6-932beaf632a5-kube-api-access-g27gf\") pod \"machine-config-operator-74547568cd-hr6kg\" (UID: \"f63e5c5e-d547-4849-afe6-932beaf632a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hr6kg" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.266172 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/988b4b22-136d-441f-a51f-8209b7181c08-tmpfs\") pod \"packageserver-d55dfcdfc-7ngdp\" (UID: 
\"988b4b22-136d-441f-a51f-8209b7181c08\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.266202 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66l49\" (UniqueName: \"kubernetes.io/projected/988b4b22-136d-441f-a51f-8209b7181c08-kube-api-access-66l49\") pod \"packageserver-d55dfcdfc-7ngdp\" (UID: \"988b4b22-136d-441f-a51f-8209b7181c08\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.266231 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e751341c-bffc-4204-b03c-5352f25323a0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6ghx\" (UID: \"e751341c-bffc-4204-b03c-5352f25323a0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ghx" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.266261 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ba10edb3-730f-4c8b-8380-54162faf0ba8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5czgb\" (UID: \"ba10edb3-730f-4c8b-8380-54162faf0ba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5czgb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.266292 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cb3afbce-0480-4641-95db-17a3c9c28d2d-csi-data-dir\") pod \"csi-hostpathplugin-znzx2\" (UID: \"cb3afbce-0480-4641-95db-17a3c9c28d2d\") " pod="hostpath-provisioner/csi-hostpathplugin-znzx2" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.266356 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cctv\" 
(UniqueName: \"kubernetes.io/projected/1a6aade7-1b78-4753-a22d-7251a1b27c9e-kube-api-access-5cctv\") pod \"collect-profiles-29521020-v7cql\" (UID: \"1a6aade7-1b78-4753-a22d-7251a1b27c9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.266388 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.266412 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.266444 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f63e5c5e-d547-4849-afe6-932beaf632a5-proxy-tls\") pod \"machine-config-operator-74547568cd-hr6kg\" (UID: \"f63e5c5e-d547-4849-afe6-932beaf632a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hr6kg" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.266474 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d74m\" (UniqueName: \"kubernetes.io/projected/b2c5bb52-5539-44fa-ae62-89450f1a97f2-kube-api-access-9d74m\") pod \"machine-config-controller-84d6567774-swnwm\" (UID: \"b2c5bb52-5539-44fa-ae62-89450f1a97f2\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swnwm" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.266505 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f582s\" (UniqueName: \"kubernetes.io/projected/8d72b4da-984e-4798-bfaa-d7a9c4b1c587-kube-api-access-f582s\") pod \"service-ca-9c57cc56f-vwnxb\" (UID: \"8d72b4da-984e-4798-bfaa-d7a9c4b1c587\") " pod="openshift-service-ca/service-ca-9c57cc56f-vwnxb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.266538 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1c46f3c7-53f2-456a-80d2-0007d79b7980-srv-cert\") pod \"catalog-operator-68c6474976-g2zhh\" (UID: \"1c46f3c7-53f2-456a-80d2-0007d79b7980\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g2zhh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.266565 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.267383 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.267423 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/cb3afbce-0480-4641-95db-17a3c9c28d2d-registration-dir\") pod \"csi-hostpathplugin-znzx2\" (UID: \"cb3afbce-0480-4641-95db-17a3c9c28d2d\") " pod="hostpath-provisioner/csi-hostpathplugin-znzx2" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.267457 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.267482 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f2nw\" (UniqueName: \"kubernetes.io/projected/33b7f2f3-0621-4fb2-b8f4-d89dd18bf7f0-kube-api-access-5f2nw\") pod \"service-ca-operator-777779d784-b6clj\" (UID: \"33b7f2f3-0621-4fb2-b8f4-d89dd18bf7f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b6clj" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.267507 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c029145-bf5d-4a8c-9419-fdcf93c96a4d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-85b84\" (UID: \"9c029145-bf5d-4a8c-9419-fdcf93c96a4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-85b84" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.267670 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64898\" (UniqueName: \"kubernetes.io/projected/39a20b8b-461a-4584-9555-03b93bc951d6-kube-api-access-64898\") pod \"cluster-image-registry-operator-dc59b4c8b-5dxq9\" (UID: \"39a20b8b-461a-4584-9555-03b93bc951d6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5dxq9" Feb 16 17:01:32 crc 
kubenswrapper[4794]: I0216 17:01:32.267711 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b2c5bb52-5539-44fa-ae62-89450f1a97f2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-swnwm\" (UID: \"b2c5bb52-5539-44fa-ae62-89450f1a97f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swnwm" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.267749 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1c46f3c7-53f2-456a-80d2-0007d79b7980-profile-collector-cert\") pod \"catalog-operator-68c6474976-g2zhh\" (UID: \"1c46f3c7-53f2-456a-80d2-0007d79b7980\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g2zhh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.267775 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cb3afbce-0480-4641-95db-17a3c9c28d2d-mountpoint-dir\") pod \"csi-hostpathplugin-znzx2\" (UID: \"cb3afbce-0480-4641-95db-17a3c9c28d2d\") " pod="hostpath-provisioner/csi-hostpathplugin-znzx2" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.267805 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.267835 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39a20b8b-461a-4584-9555-03b93bc951d6-bound-sa-token\") pod 
\"cluster-image-registry-operator-dc59b4c8b-5dxq9\" (UID: \"39a20b8b-461a-4584-9555-03b93bc951d6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5dxq9" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.267868 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a53e5ad-d4b9-4d98-b3c1-b3e59abf44e3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lzvbh\" (UID: \"3a53e5ad-d4b9-4d98-b3c1-b3e59abf44e3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lzvbh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.267898 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ced56472-de39-44c3-af64-45e0c6dbe0c6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-h26hh\" (UID: \"ced56472-de39-44c3-af64-45e0c6dbe0c6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h26hh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.268179 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/08547bee-d06e-467b-8be7-db65e24c7e49-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s8r99\" (UID: \"08547bee-d06e-467b-8be7-db65e24c7e49\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s8r99" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.266882 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abf42440-612e-4e7f-95ee-5a4860c9bc59-config-volume\") pod \"dns-default-9btqh\" (UID: \"abf42440-612e-4e7f-95ee-5a4860c9bc59\") " pod="openshift-dns/dns-default-9btqh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.269023 
4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cb3afbce-0480-4641-95db-17a3c9c28d2d-csi-data-dir\") pod \"csi-hostpathplugin-znzx2\" (UID: \"cb3afbce-0480-4641-95db-17a3c9c28d2d\") " pod="hostpath-provisioner/csi-hostpathplugin-znzx2" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.269019 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/789593ed-6d75-46b7-9c80-641a7b76a749-bound-sa-token\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.269065 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2b7e6568-6f15-4a8f-aca6-38be84a1a624-audit-dir\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.269504 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ced56472-de39-44c3-af64-45e0c6dbe0c6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-h26hh\" (UID: \"ced56472-de39-44c3-af64-45e0c6dbe0c6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h26hh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.270644 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/988b4b22-136d-441f-a51f-8209b7181c08-tmpfs\") pod \"packageserver-d55dfcdfc-7ngdp\" (UID: \"988b4b22-136d-441f-a51f-8209b7181c08\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp" Feb 16 17:01:32 crc kubenswrapper[4794]: E0216 17:01:32.270971 4794 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:32.770951695 +0000 UTC m=+118.719046342 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.272731 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f63e5c5e-d547-4849-afe6-932beaf632a5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hr6kg\" (UID: \"f63e5c5e-d547-4849-afe6-932beaf632a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hr6kg" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.273693 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a6aade7-1b78-4753-a22d-7251a1b27c9e-config-volume\") pod \"collect-profiles-29521020-v7cql\" (UID: \"1a6aade7-1b78-4753-a22d-7251a1b27c9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.275666 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8d72b4da-984e-4798-bfaa-d7a9c4b1c587-signing-cabundle\") pod \"service-ca-9c57cc56f-vwnxb\" (UID: \"8d72b4da-984e-4798-bfaa-d7a9c4b1c587\") " pod="openshift-service-ca/service-ca-9c57cc56f-vwnxb" Feb 16 
17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.276706 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cb3afbce-0480-4641-95db-17a3c9c28d2d-registration-dir\") pod \"csi-hostpathplugin-znzx2\" (UID: \"cb3afbce-0480-4641-95db-17a3c9c28d2d\") " pod="hostpath-provisioner/csi-hostpathplugin-znzx2" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.276838 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cb3afbce-0480-4641-95db-17a3c9c28d2d-socket-dir\") pod \"csi-hostpathplugin-znzx2\" (UID: \"cb3afbce-0480-4641-95db-17a3c9c28d2d\") " pod="hostpath-provisioner/csi-hostpathplugin-znzx2" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.278978 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cb3afbce-0480-4641-95db-17a3c9c28d2d-mountpoint-dir\") pod \"csi-hostpathplugin-znzx2\" (UID: \"cb3afbce-0480-4641-95db-17a3c9c28d2d\") " pod="hostpath-provisioner/csi-hostpathplugin-znzx2" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.279021 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c029145-bf5d-4a8c-9419-fdcf93c96a4d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-85b84\" (UID: \"9c029145-bf5d-4a8c-9419-fdcf93c96a4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-85b84" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.280112 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cfdff039-c0a8-4244-9f4e-7aeb01507348-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vhvsb\" (UID: \"cfdff039-c0a8-4244-9f4e-7aeb01507348\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vhvsb" 
Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.280454 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ced56472-de39-44c3-af64-45e0c6dbe0c6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-h26hh\" (UID: \"ced56472-de39-44c3-af64-45e0c6dbe0c6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h26hh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.280464 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7559b705-987c-4020-afac-604cf0e58bbf-node-bootstrap-token\") pod \"machine-config-server-hmkvr\" (UID: \"7559b705-987c-4020-afac-604cf0e58bbf\") " pod="openshift-machine-config-operator/machine-config-server-hmkvr" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.280668 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.282547 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b2c5bb52-5539-44fa-ae62-89450f1a97f2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-swnwm\" (UID: \"b2c5bb52-5539-44fa-ae62-89450f1a97f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swnwm" Feb 16 17:01:32 crc kubenswrapper[4794]: W0216 17:01:32.282729 4794 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28dcafba_7fbd_4ee4_aac0_431d46f0a438.slice/crio-fe8aae61dbc426930b0f7f924e8333f94d306f0f45b1c8a11d60269384bc84b7 WatchSource:0}: Error finding container fe8aae61dbc426930b0f7f924e8333f94d306f0f45b1c8a11d60269384bc84b7: Status 404 returned error can't find the container with id fe8aae61dbc426930b0f7f924e8333f94d306f0f45b1c8a11d60269384bc84b7 Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.282860 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cb3afbce-0480-4641-95db-17a3c9c28d2d-plugins-dir\") pod \"csi-hostpathplugin-znzx2\" (UID: \"cb3afbce-0480-4641-95db-17a3c9c28d2d\") " pod="hostpath-provisioner/csi-hostpathplugin-znzx2" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.283387 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b2c5bb52-5539-44fa-ae62-89450f1a97f2-proxy-tls\") pod \"machine-config-controller-84d6567774-swnwm\" (UID: \"b2c5bb52-5539-44fa-ae62-89450f1a97f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swnwm" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.283522 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.285115 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/988b4b22-136d-441f-a51f-8209b7181c08-apiservice-cert\") pod \"packageserver-d55dfcdfc-7ngdp\" (UID: \"988b4b22-136d-441f-a51f-8209b7181c08\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.285454 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.285474 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/abf42440-612e-4e7f-95ee-5a4860c9bc59-metrics-tls\") pod \"dns-default-9btqh\" (UID: \"abf42440-612e-4e7f-95ee-5a4860c9bc59\") " pod="openshift-dns/dns-default-9btqh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.286077 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33b7f2f3-0621-4fb2-b8f4-d89dd18bf7f0-config\") pod \"service-ca-operator-777779d784-b6clj\" (UID: \"33b7f2f3-0621-4fb2-b8f4-d89dd18bf7f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b6clj" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.287220 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9c029145-bf5d-4a8c-9419-fdcf93c96a4d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-85b84\" (UID: \"9c029145-bf5d-4a8c-9419-fdcf93c96a4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-85b84" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.287434 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39a20b8b-461a-4584-9555-03b93bc951d6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5dxq9\" (UID: 
\"39a20b8b-461a-4584-9555-03b93bc951d6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5dxq9" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.287570 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/988b4b22-136d-441f-a51f-8209b7181c08-webhook-cert\") pod \"packageserver-d55dfcdfc-7ngdp\" (UID: \"988b4b22-136d-441f-a51f-8209b7181c08\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.289254 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/39a20b8b-461a-4584-9555-03b93bc951d6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5dxq9\" (UID: \"39a20b8b-461a-4584-9555-03b93bc951d6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5dxq9" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.289803 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.290146 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33b7f2f3-0621-4fb2-b8f4-d89dd18bf7f0-serving-cert\") pod \"service-ca-operator-777779d784-b6clj\" (UID: \"33b7f2f3-0621-4fb2-b8f4-d89dd18bf7f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b6clj" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.290403 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3a53e5ad-d4b9-4d98-b3c1-b3e59abf44e3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lzvbh\" (UID: \"3a53e5ad-d4b9-4d98-b3c1-b3e59abf44e3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lzvbh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.290531 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.291245 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.291555 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/54320564-7237-45c8-b465-82f3546faf41-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6x29q\" (UID: \"54320564-7237-45c8-b465-82f3546faf41\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6x29q" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.291700 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a6aade7-1b78-4753-a22d-7251a1b27c9e-secret-volume\") pod \"collect-profiles-29521020-v7cql\" (UID: \"1a6aade7-1b78-4753-a22d-7251a1b27c9e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.291909 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-audit-policies\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.292198 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e751341c-bffc-4204-b03c-5352f25323a0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6ghx\" (UID: \"e751341c-bffc-4204-b03c-5352f25323a0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ghx" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.293690 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.293741 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8d72b4da-984e-4798-bfaa-d7a9c4b1c587-signing-key\") pod \"service-ca-9c57cc56f-vwnxb\" (UID: \"8d72b4da-984e-4798-bfaa-d7a9c4b1c587\") " pod="openshift-service-ca/service-ca-9c57cc56f-vwnxb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.293904 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77e8b691-0679-4dd2-996e-10ee488c5594-cert\") pod \"ingress-canary-4vwx5\" (UID: 
\"77e8b691-0679-4dd2-996e-10ee488c5594\") " pod="openshift-ingress-canary/ingress-canary-4vwx5" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.295618 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ba10edb3-730f-4c8b-8380-54162faf0ba8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5czgb\" (UID: \"ba10edb3-730f-4c8b-8380-54162faf0ba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5czgb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.296287 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ba10edb3-730f-4c8b-8380-54162faf0ba8-srv-cert\") pod \"olm-operator-6b444d44fb-5czgb\" (UID: \"ba10edb3-730f-4c8b-8380-54162faf0ba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5czgb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.297579 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.313815 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ef8bb78b-0644-4319-8928-4ba08d325777-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jv2jb\" (UID: \"ef8bb78b-0644-4319-8928-4ba08d325777\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jv2jb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.314442 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/1c46f3c7-53f2-456a-80d2-0007d79b7980-srv-cert\") pod \"catalog-operator-68c6474976-g2zhh\" (UID: \"1c46f3c7-53f2-456a-80d2-0007d79b7980\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g2zhh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.315485 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7559b705-987c-4020-afac-604cf0e58bbf-certs\") pod \"machine-config-server-hmkvr\" (UID: \"7559b705-987c-4020-afac-604cf0e58bbf\") " pod="openshift-machine-config-operator/machine-config-server-hmkvr" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.315860 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.315921 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a53e5ad-d4b9-4d98-b3c1-b3e59abf44e3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lzvbh\" (UID: \"3a53e5ad-d4b9-4d98-b3c1-b3e59abf44e3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lzvbh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.316978 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f63e5c5e-d547-4849-afe6-932beaf632a5-proxy-tls\") pod \"machine-config-operator-74547568cd-hr6kg\" (UID: \"f63e5c5e-d547-4849-afe6-932beaf632a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hr6kg" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 
17:01:32.317228 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1c46f3c7-53f2-456a-80d2-0007d79b7980-profile-collector-cert\") pod \"catalog-operator-68c6474976-g2zhh\" (UID: \"1c46f3c7-53f2-456a-80d2-0007d79b7980\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g2zhh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.317721 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.321992 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dp7xf"] Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.322038 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsztn"] Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.322631 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqtv9\" (UniqueName: \"kubernetes.io/projected/54320564-7237-45c8-b465-82f3546faf41-kube-api-access-bqtv9\") pod \"multus-admission-controller-857f4d67dd-6x29q\" (UID: \"54320564-7237-45c8-b465-82f3546faf41\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6x29q" Feb 16 17:01:32 crc kubenswrapper[4794]: W0216 17:01:32.330593 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda58008d3_84e1_425d_a7de_bc37a0f2664e.slice/crio-5c51403af8ec4a0a9afee7cc3901abc206114ee2aaa52b097b86620aa3fad70e WatchSource:0}: Error finding container 
5c51403af8ec4a0a9afee7cc3901abc206114ee2aaa52b097b86620aa3fad70e: Status 404 returned error can't find the container with id 5c51403af8ec4a0a9afee7cc3901abc206114ee2aaa52b097b86620aa3fad70e Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.341263 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6x29q" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.344478 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqwq8\" (UniqueName: \"kubernetes.io/projected/08547bee-d06e-467b-8be7-db65e24c7e49-kube-api-access-bqwq8\") pod \"control-plane-machine-set-operator-78cbb6b69f-s8r99\" (UID: \"08547bee-d06e-467b-8be7-db65e24c7e49\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s8r99" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.356810 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h582\" (UniqueName: \"kubernetes.io/projected/ef8bb78b-0644-4319-8928-4ba08d325777-kube-api-access-7h582\") pod \"package-server-manager-789f6589d5-jv2jb\" (UID: \"ef8bb78b-0644-4319-8928-4ba08d325777\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jv2jb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.365489 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2rjhr"] Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.368460 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zwsbc"] Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.369148 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:32 crc kubenswrapper[4794]: E0216 17:01:32.369711 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:32.869695379 +0000 UTC m=+118.817790026 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.372918 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jbgr\" (UniqueName: \"kubernetes.io/projected/9c029145-bf5d-4a8c-9419-fdcf93c96a4d-kube-api-access-9jbgr\") pod \"marketplace-operator-79b997595-85b84\" (UID: \"9c029145-bf5d-4a8c-9419-fdcf93c96a4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-85b84" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.379408 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-lgmrt" Feb 16 17:01:32 crc kubenswrapper[4794]: W0216 17:01:32.385425 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3fa8c07_9947_4f5c_8295_bdec401113b0.slice/crio-8394e321ba83aa666298dfb95a4ffa24ba1962f848e7aaa0ba51dea48acd3aa3 WatchSource:0}: Error finding container 8394e321ba83aa666298dfb95a4ffa24ba1962f848e7aaa0ba51dea48acd3aa3: Status 404 returned error can't find the container with id 8394e321ba83aa666298dfb95a4ffa24ba1962f848e7aaa0ba51dea48acd3aa3 Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.398283 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5kdd\" (UniqueName: \"kubernetes.io/projected/abf42440-612e-4e7f-95ee-5a4860c9bc59-kube-api-access-b5kdd\") pod \"dns-default-9btqh\" (UID: \"abf42440-612e-4e7f-95ee-5a4860c9bc59\") " pod="openshift-dns/dns-default-9btqh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.407458 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6p66\" (UniqueName: \"kubernetes.io/projected/2b7e6568-6f15-4a8f-aca6-38be84a1a624-kube-api-access-d6p66\") pod \"oauth-openshift-558db77b4-qfr5h\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") " pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.416570 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vhvsb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.430501 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dsk9b"] Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.432191 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggdpv\" (UniqueName: \"kubernetes.io/projected/ced56472-de39-44c3-af64-45e0c6dbe0c6-kube-api-access-ggdpv\") pod \"openshift-apiserver-operator-796bbdcf4f-h26hh\" (UID: \"ced56472-de39-44c3-af64-45e0c6dbe0c6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h26hh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.446920 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2" event={"ID":"4943f829-0922-4e87-a750-1cfc2f2f1b72","Type":"ContainerStarted","Data":"9abd71c82915eb373f82de656aee2691b053ba7809d24c2611c0a857d4f7f0e6"} Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.446979 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.446990 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2" event={"ID":"4943f829-0922-4e87-a750-1cfc2f2f1b72","Type":"ContainerStarted","Data":"a9f7c12a30cf59fc961479a168a2afccfa40ba600e849f884889fcf850d6de01"} Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.452443 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqftx\" (UniqueName: \"kubernetes.io/projected/7559b705-987c-4020-afac-604cf0e58bbf-kube-api-access-vqftx\") pod \"machine-config-server-hmkvr\" (UID: 
\"7559b705-987c-4020-afac-604cf0e58bbf\") " pod="openshift-machine-config-operator/machine-config-server-hmkvr" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.460695 4794 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-42sb2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.460755 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2" podUID="4943f829-0922-4e87-a750-1cfc2f2f1b72" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.463562 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5rq7" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.470107 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d74m\" (UniqueName: \"kubernetes.io/projected/b2c5bb52-5539-44fa-ae62-89450f1a97f2-kube-api-access-9d74m\") pod \"machine-config-controller-84d6567774-swnwm\" (UID: \"b2c5bb52-5539-44fa-ae62-89450f1a97f2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swnwm" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.470463 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:32 crc kubenswrapper[4794]: E0216 17:01:32.471156 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:32.971123595 +0000 UTC m=+118.919218242 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.471397 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hg4bz" event={"ID":"f455dc97-cc72-4981-ac4d-097fe30413d1","Type":"ContainerStarted","Data":"3b4f650e1efdfca20b9cab28b2a6e22fbec5d17de3360b4f91cdb4c10bc4ef9f"} Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.471455 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hg4bz" event={"ID":"f455dc97-cc72-4981-ac4d-097fe30413d1","Type":"ContainerStarted","Data":"948339bf188a75f550c29282b32d5c7d8edce3948c36f2f9e0b49146fb34203f"} Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.475269 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rr75f"] Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.476035 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fg5gp" event={"ID":"5cff129b-bd54-4115-bc42-d5617d10eae0","Type":"ContainerStarted","Data":"bb505b42db0a30ecfbf7160e5a9551388e9651ead01c0852d3d2f33b4a197af4"} Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.481610 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsztn" 
event={"ID":"8734f760-160a-49c7-9eb3-65e33d816f02","Type":"ContainerStarted","Data":"b9b41b3401e0a9f45f6430bc5ad2d4a843f4201cc6c4cb635f2cf1291b6af2f1"} Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.486657 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zwsbc" event={"ID":"f3fa8c07-9947-4f5c-8295-bdec401113b0","Type":"ContainerStarted","Data":"8394e321ba83aa666298dfb95a4ffa24ba1962f848e7aaa0ba51dea48acd3aa3"} Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.491149 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2rjhr" event={"ID":"c033bce4-9921-49ec-bda6-ba7f79647c00","Type":"ContainerStarted","Data":"82e315110ec7f8d6a8e52ca2d45b52f8c47ea33ba79da035c978598fbcde9383"} Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.493526 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e751341c-bffc-4204-b03c-5352f25323a0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6ghx\" (UID: \"e751341c-bffc-4204-b03c-5352f25323a0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ghx" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.509868 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mkhh2" event={"ID":"28dcafba-7fbd-4ee4-aac0-431d46f0a438","Type":"ContainerStarted","Data":"fe8aae61dbc426930b0f7f924e8333f94d306f0f45b1c8a11d60269384bc84b7"} Feb 16 17:01:32 crc kubenswrapper[4794]: W0216 17:01:32.510941 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod112f572c_0d1d_4bb9_a66c_202a42a9aba1.slice/crio-5fcf4f862de544ef9a911e25279e08acdfef8fbcb09eb46378581fdbf5ffe3f6 WatchSource:0}: Error finding container 
5fcf4f862de544ef9a911e25279e08acdfef8fbcb09eb46378581fdbf5ffe3f6: Status 404 returned error can't find the container with id 5fcf4f862de544ef9a911e25279e08acdfef8fbcb09eb46378581fdbf5ffe3f6 Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.511965 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mmhdw" event={"ID":"99adae54-586d-4a41-90e1-288285c47957","Type":"ContainerStarted","Data":"6d86072b07f11f443652ac3c5f58844cca0932c7b5979e85e27d18233d8c0d36"} Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.511989 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mmhdw" event={"ID":"99adae54-586d-4a41-90e1-288285c47957","Type":"ContainerStarted","Data":"6f0c810cbfd5c0c0f8ffaed40f9efdba34db0018903446b821311807221d1f8e"} Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.511998 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mmhdw" event={"ID":"99adae54-586d-4a41-90e1-288285c47957","Type":"ContainerStarted","Data":"c868570cc4e0b6345ae4d5787c8f6ff44b1fc034a1e899d3934183f45a5abe20"} Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.513362 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dp7xf" event={"ID":"a58008d3-84e1-425d-a7de-bc37a0f2664e","Type":"ContainerStarted","Data":"5c51403af8ec4a0a9afee7cc3901abc206114ee2aaa52b097b86620aa3fad70e"} Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.530910 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5fbkt"] Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.530983 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vtrkl"] Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.534964 4794 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cctv\" (UniqueName: \"kubernetes.io/projected/1a6aade7-1b78-4753-a22d-7251a1b27c9e-kube-api-access-5cctv\") pod \"collect-profiles-29521020-v7cql\" (UID: \"1a6aade7-1b78-4753-a22d-7251a1b27c9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.554556 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s8r99" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.560053 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k59r5\" (UniqueName: \"kubernetes.io/projected/ba10edb3-730f-4c8b-8380-54162faf0ba8-kube-api-access-k59r5\") pod \"olm-operator-6b444d44fb-5czgb\" (UID: \"ba10edb3-730f-4c8b-8380-54162faf0ba8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5czgb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.564836 4794 generic.go:334] "Generic (PLEG): container finished" podID="9bbc4953-c59b-46cd-8f17-513136731d2a" containerID="7c7bf7a668f7889bd4183785263a8119d4dfeddf325788e734a731607ff77aa9" exitCode=0 Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.564924 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h" event={"ID":"9bbc4953-c59b-46cd-8f17-513136731d2a","Type":"ContainerDied","Data":"7c7bf7a668f7889bd4183785263a8119d4dfeddf325788e734a731607ff77aa9"} Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.564950 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h" event={"ID":"9bbc4953-c59b-46cd-8f17-513136731d2a","Type":"ContainerStarted","Data":"216798a086b0f29888e80d1b57d849ff2a58160fb4d7f418cf7795b310a6c582"} Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.567682 4794 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h26hh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.571879 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:32 crc kubenswrapper[4794]: E0216 17:01:32.573109 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:33.073093815 +0000 UTC m=+119.021188462 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.575930 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xtklb" event={"ID":"33ee8fad-d568-45d8-b55f-3302e5f3c9c0","Type":"ContainerStarted","Data":"34112389fc3d1daaa3d4e709570fef8098ac7c4f83d40886bbd5dd4080c32dc0"} Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.575977 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xtklb" 
event={"ID":"33ee8fad-d568-45d8-b55f-3302e5f3c9c0","Type":"ContainerStarted","Data":"0962fc300c17540a468ebe8e876dffbbc51cdb629a1b42aeefa701c3b315b850"} Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.587559 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swnwm" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.596087 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ghx" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.600800 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f2nw\" (UniqueName: \"kubernetes.io/projected/33b7f2f3-0621-4fb2-b8f4-d89dd18bf7f0-kube-api-access-5f2nw\") pod \"service-ca-operator-777779d784-b6clj\" (UID: \"33b7f2f3-0621-4fb2-b8f4-d89dd18bf7f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b6clj" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.602721 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.611443 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jv2jb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.611521 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md746\" (UniqueName: \"kubernetes.io/projected/6f58f777-b916-4180-9e54-f138e10b2297-kube-api-access-md746\") pod \"migrator-59844c95c7-6z49s\" (UID: \"6f58f777-b916-4180-9e54-f138e10b2297\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6z49s" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.620746 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5czgb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.631491 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6z49s" Feb 16 17:01:32 crc kubenswrapper[4794]: W0216 17:01:32.632480 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d2009dc_5385_4529_b1b3_d14a75a50089.slice/crio-6aa17702391053b429ec79239e5b2f4cdd4c82047e0d43bb98f88e7d51c3be1c WatchSource:0}: Error finding container 6aa17702391053b429ec79239e5b2f4cdd4c82047e0d43bb98f88e7d51c3be1c: Status 404 returned error can't find the container with id 6aa17702391053b429ec79239e5b2f4cdd4c82047e0d43bb98f88e7d51c3be1c Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.633235 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b7zz\" (UniqueName: \"kubernetes.io/projected/cb3afbce-0480-4641-95db-17a3c9c28d2d-kube-api-access-5b7zz\") pod \"csi-hostpathplugin-znzx2\" (UID: \"cb3afbce-0480-4641-95db-17a3c9c28d2d\") " pod="hostpath-provisioner/csi-hostpathplugin-znzx2" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.635473 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvpsg\" (UniqueName: \"kubernetes.io/projected/3a53e5ad-d4b9-4d98-b3c1-b3e59abf44e3-kube-api-access-fvpsg\") pod \"kube-storage-version-migrator-operator-b67b599dd-lzvbh\" (UID: \"3a53e5ad-d4b9-4d98-b3c1-b3e59abf44e3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lzvbh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.651101 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t74qc\" (UniqueName: \"kubernetes.io/projected/77e8b691-0679-4dd2-996e-10ee488c5594-kube-api-access-t74qc\") pod 
\"ingress-canary-4vwx5\" (UID: \"77e8b691-0679-4dd2-996e-10ee488c5594\") " pod="openshift-ingress-canary/ingress-canary-4vwx5" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.657323 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-85b84" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.661260 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-b6clj" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.662563 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64898\" (UniqueName: \"kubernetes.io/projected/39a20b8b-461a-4584-9555-03b93bc951d6-kube-api-access-64898\") pod \"cluster-image-registry-operator-dc59b4c8b-5dxq9\" (UID: \"39a20b8b-461a-4584-9555-03b93bc951d6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5dxq9" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.674865 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hmkvr" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.677621 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.678143 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql" Feb 16 17:01:32 crc kubenswrapper[4794]: E0216 17:01:32.679074 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:33.179062282 +0000 UTC m=+119.127156929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.682785 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g27gf\" (UniqueName: \"kubernetes.io/projected/f63e5c5e-d547-4849-afe6-932beaf632a5-kube-api-access-g27gf\") pod \"machine-config-operator-74547568cd-hr6kg\" (UID: \"f63e5c5e-d547-4849-afe6-932beaf632a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hr6kg" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.689728 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4vwx5" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.693337 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66l49\" (UniqueName: \"kubernetes.io/projected/988b4b22-136d-441f-a51f-8209b7181c08-kube-api-access-66l49\") pod \"packageserver-d55dfcdfc-7ngdp\" (UID: \"988b4b22-136d-441f-a51f-8209b7181c08\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.697131 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-9btqh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.713167 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-znzx2" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.719328 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b77qj"] Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.729846 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f582s\" (UniqueName: \"kubernetes.io/projected/8d72b4da-984e-4798-bfaa-d7a9c4b1c587-kube-api-access-f582s\") pod \"service-ca-9c57cc56f-vwnxb\" (UID: \"8d72b4da-984e-4798-bfaa-d7a9c4b1c587\") " pod="openshift-service-ca/service-ca-9c57cc56f-vwnxb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.733078 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39a20b8b-461a-4584-9555-03b93bc951d6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5dxq9\" (UID: \"39a20b8b-461a-4584-9555-03b93bc951d6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5dxq9" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.739164 4794 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6x29q"] Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.755098 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk54j\" (UniqueName: \"kubernetes.io/projected/1c46f3c7-53f2-456a-80d2-0007d79b7980-kube-api-access-nk54j\") pod \"catalog-operator-68c6474976-g2zhh\" (UID: \"1c46f3c7-53f2-456a-80d2-0007d79b7980\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g2zhh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.785280 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:32 crc kubenswrapper[4794]: E0216 17:01:32.785643 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:33.285595133 +0000 UTC m=+119.233689780 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.785844 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:32 crc kubenswrapper[4794]: E0216 17:01:32.786153 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:33.286140588 +0000 UTC m=+119.234235235 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.787908 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vhvsb"] Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.821716 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lzvbh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.837552 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5dxq9" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.841890 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-lgmrt"] Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.887975 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:32 crc kubenswrapper[4794]: E0216 17:01:32.888694 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 17:01:33.388675483 +0000 UTC m=+119.336770130 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.906065 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xtklb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.910564 4794 patch_prober.go:28] interesting pod/router-default-5444994796-xtklb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:01:32 crc kubenswrapper[4794]: [-]has-synced failed: reason withheld Feb 16 17:01:32 crc kubenswrapper[4794]: [+]process-running ok Feb 16 17:01:32 crc kubenswrapper[4794]: healthz check failed Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.910686 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xtklb" podUID="33ee8fad-d568-45d8-b55f-3302e5f3c9c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.911821 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5rq7"] Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.926655 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hr6kg" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.936355 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vwnxb" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.948849 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g2zhh" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.987093 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp" Feb 16 17:01:32 crc kubenswrapper[4794]: I0216 17:01:32.989804 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:32 crc kubenswrapper[4794]: E0216 17:01:32.990138 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:33.49012582 +0000 UTC m=+119.438220467 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:33 crc kubenswrapper[4794]: W0216 17:01:33.051646 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7559b705_987c_4020_afac_604cf0e58bbf.slice/crio-887968d24d8658c576e8ece7cf270777e21134d9843840a92ddebbf70b876f00 WatchSource:0}: Error finding container 887968d24d8658c576e8ece7cf270777e21134d9843840a92ddebbf70b876f00: Status 404 returned error can't find the container with id 887968d24d8658c576e8ece7cf270777e21134d9843840a92ddebbf70b876f00 Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.091125 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:33 crc kubenswrapper[4794]: E0216 17:01:33.092108 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:33.592070799 +0000 UTC m=+119.540165446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.192712 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:33 crc kubenswrapper[4794]: E0216 17:01:33.193018 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:33.692997072 +0000 UTC m=+119.641091719 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.283093 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h26hh"] Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.293361 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:33 crc kubenswrapper[4794]: E0216 17:01:33.293619 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:33.793592825 +0000 UTC m=+119.741687482 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.394955 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:33 crc kubenswrapper[4794]: E0216 17:01:33.395398 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:33.895380881 +0000 UTC m=+119.843475528 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:33 crc kubenswrapper[4794]: W0216 17:01:33.423907 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podced56472_de39_44c3_af64_45e0c6dbe0c6.slice/crio-09b7c28046a13a3831cc381408f52ead87836ef2cf5ef0f89efb125a3ae62201 WatchSource:0}: Error finding container 09b7c28046a13a3831cc381408f52ead87836ef2cf5ef0f89efb125a3ae62201: Status 404 returned error can't find the container with id 09b7c28046a13a3831cc381408f52ead87836ef2cf5ef0f89efb125a3ae62201 Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.498362 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:33 crc kubenswrapper[4794]: E0216 17:01:33.499096 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:33.999081127 +0000 UTC m=+119.947175764 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.527453 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xtklb" podStartSLOduration=98.5274264 podStartE2EDuration="1m38.5274264s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:33.488718381 +0000 UTC m=+119.436813028" watchObservedRunningTime="2026-02-16 17:01:33.5274264 +0000 UTC m=+119.475521047" Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.570463 4794 csr.go:261] certificate signing request csr-h8hwn is approved, waiting to be issued Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.575588 4794 csr.go:257] certificate signing request csr-h8hwn is issued Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.585110 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl" event={"ID":"077b99e5-e95c-4afd-9008-d1f18a6b2f70","Type":"ContainerStarted","Data":"add7e3e5400922633073ebdd7a32eccd94de433e0c188355401f3ae32fe751dc"} Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.585172 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl" event={"ID":"077b99e5-e95c-4afd-9008-d1f18a6b2f70","Type":"ContainerStarted","Data":"c32d6153944692c4ac9d2529aae40bd2b1f6ea68674de196d99841de270f21ff"} Feb 16 
17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.585706 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl" Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.589350 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsk9b" event={"ID":"3aee29ce-f5ae-42d5-9c0c-7648739c6c49","Type":"ContainerStarted","Data":"b69b122895b47bd348cb12d8a2f40483b039312438df11cd52448c98ff3934a8"} Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.589409 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsk9b" event={"ID":"3aee29ce-f5ae-42d5-9c0c-7648739c6c49","Type":"ContainerStarted","Data":"1a33e0133965b309d5a32152d3d9d215df4b492f61ce27921b70fb46cdff26fa"} Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.591249 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b77qj" event={"ID":"902f37b4-ec5c-40ae-b110-f17b282f7ddb","Type":"ContainerStarted","Data":"be73a9200df78e7d4d4795952ac59e6b4ed1f1bd9d33221ca4c116225b576c31"} Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.591298 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b77qj" event={"ID":"902f37b4-ec5c-40ae-b110-f17b282f7ddb","Type":"ContainerStarted","Data":"ea7089c5cec4e4477fdbfa4f44e37a5f0e6689fc9a5cceef94fd80f4075a794d"} Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.593566 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5rq7" event={"ID":"48aebd18-410a-4f26-8405-e618d55f7881","Type":"ContainerStarted","Data":"8fc137de8efd7174a08582e9f80e9a839cbd62fb4b9f1e917c47a5c081009b78"} Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.595560 4794 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zwsbc" event={"ID":"f3fa8c07-9947-4f5c-8295-bdec401113b0","Type":"ContainerStarted","Data":"920b53d1bf849546d05cda8efc0685b0a211289a07c9e4dcd4802b3251e52136"} Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.597094 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-lgmrt" event={"ID":"ec4460e3-d99e-4ef7-9768-1d033a3e2538","Type":"ContainerStarted","Data":"ce45f29a9c2a818e2ea4a1bcc0082800a6ed95042c39f26a03fb7f2782486988"} Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.599045 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5fbkt" event={"ID":"9d2009dc-5385-4529-b1b3-d14a75a50089","Type":"ContainerStarted","Data":"6aa17702391053b429ec79239e5b2f4cdd4c82047e0d43bb98f88e7d51c3be1c"} Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.600203 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:33 crc kubenswrapper[4794]: E0216 17:01:33.600527 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:34.100514693 +0000 UTC m=+120.048609340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.630000 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6x29q" event={"ID":"54320564-7237-45c8-b465-82f3546faf41","Type":"ContainerStarted","Data":"f3c3e3ab7b6ad9fa20ba8cd4aaa174337d9b07b2ec3f8d55c29912355330d3dc"} Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.632106 4794 generic.go:334] "Generic (PLEG): container finished" podID="28dcafba-7fbd-4ee4-aac0-431d46f0a438" containerID="e9f4d510dbd2241bdaf7be1216630dee0b51da5add5713bf853f1022789e781c" exitCode=0 Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.632171 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mkhh2" event={"ID":"28dcafba-7fbd-4ee4-aac0-431d46f0a438","Type":"ContainerDied","Data":"e9f4d510dbd2241bdaf7be1216630dee0b51da5add5713bf853f1022789e781c"} Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.637983 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f" event={"ID":"112f572c-0d1d-4bb9-a66c-202a42a9aba1","Type":"ContainerStarted","Data":"5fcf4f862de544ef9a911e25279e08acdfef8fbcb09eb46378581fdbf5ffe3f6"} Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.639771 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsztn" 
event={"ID":"8734f760-160a-49c7-9eb3-65e33d816f02","Type":"ContainerStarted","Data":"044da145c1148ea8de85be917b1f5a27f4c758e74bd10e19938c1b65d1ecfcc3"} Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.650175 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h26hh" event={"ID":"ced56472-de39-44c3-af64-45e0c6dbe0c6","Type":"ContainerStarted","Data":"09b7c28046a13a3831cc381408f52ead87836ef2cf5ef0f89efb125a3ae62201"} Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.659571 4794 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vtrkl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.659648 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl" podUID="077b99e5-e95c-4afd-9008-d1f18a6b2f70" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.691949 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2rjhr" event={"ID":"c033bce4-9921-49ec-bda6-ba7f79647c00","Type":"ContainerStarted","Data":"838d3c46d3db0b16099faf92c3f91eb78c562d30f0fa62b6bf99560329d75ffe"} Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.705054 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 
17:01:33 crc kubenswrapper[4794]: E0216 17:01:33.707424 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:34.207387513 +0000 UTC m=+120.155482160 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.719787 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vhvsb" event={"ID":"cfdff039-c0a8-4244-9f4e-7aeb01507348","Type":"ContainerStarted","Data":"f301e8b23556e96938e4bb48b34422e6098f61974ed855606aaa0cf08dfa7040"} Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.729072 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hmkvr" event={"ID":"7559b705-987c-4020-afac-604cf0e58bbf","Type":"ContainerStarted","Data":"887968d24d8658c576e8ece7cf270777e21134d9843840a92ddebbf70b876f00"} Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.733975 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fg5gp" event={"ID":"5cff129b-bd54-4115-bc42-d5617d10eae0","Type":"ContainerStarted","Data":"2a41b4d122fd663f5a09ac085ccf8d112e3502f96a085f8947bf7b63de27c557"} Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.734944 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/downloads-7954f5f757-fg5gp" Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.740731 4794 patch_prober.go:28] interesting pod/downloads-7954f5f757-fg5gp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.740747 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dp7xf" event={"ID":"a58008d3-84e1-425d-a7de-bc37a0f2664e","Type":"ContainerStarted","Data":"613526d2de628d8a3c014d954bb95e8d6692c41bdd176028134920e96f63a40c"} Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.740820 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fg5gp" podUID="5cff129b-bd54-4115-bc42-d5617d10eae0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.752673 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5czgb"] Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.758217 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jv2jb"] Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.760405 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2" Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.808255 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:33 crc kubenswrapper[4794]: E0216 17:01:33.809602 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:34.30959016 +0000 UTC m=+120.257684797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.857844 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2" podStartSLOduration=97.857824932 podStartE2EDuration="1m37.857824932s" podCreationTimestamp="2026-02-16 16:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:33.855432018 +0000 UTC m=+119.803526675" watchObservedRunningTime="2026-02-16 17:01:33.857824932 +0000 UTC m=+119.805919579" Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.909261 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:33 crc 
kubenswrapper[4794]: E0216 17:01:33.909533 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:34.409486865 +0000 UTC m=+120.357581512 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.910695 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:33 crc kubenswrapper[4794]: E0216 17:01:33.914612 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:34.414590441 +0000 UTC m=+120.362685268 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.918208 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hg4bz" podStartSLOduration=98.918189686 podStartE2EDuration="1m38.918189686s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:33.884343947 +0000 UTC m=+119.832438604" watchObservedRunningTime="2026-02-16 17:01:33.918189686 +0000 UTC m=+119.866284333" Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.920370 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-b6clj"] Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.930386 4794 patch_prober.go:28] interesting pod/router-default-5444994796-xtklb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:01:33 crc kubenswrapper[4794]: [-]has-synced failed: reason withheld Feb 16 17:01:33 crc kubenswrapper[4794]: [+]process-running ok Feb 16 17:01:33 crc kubenswrapper[4794]: healthz check failed Feb 16 17:01:33 crc kubenswrapper[4794]: I0216 17:01:33.930442 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xtklb" 
podUID="33ee8fad-d568-45d8-b55f-3302e5f3c9c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.012359 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:34 crc kubenswrapper[4794]: E0216 17:01:34.016074 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:34.516043187 +0000 UTC m=+120.464137844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.017316 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:34 crc kubenswrapper[4794]: E0216 17:01:34.018465 4794 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:34.518450641 +0000 UTC m=+120.466545288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.121360 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:34 crc kubenswrapper[4794]: E0216 17:01:34.122139 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:34.622123537 +0000 UTC m=+120.570218184 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.194789 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s8r99"] Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.214469 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql"] Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.227424 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:34 crc kubenswrapper[4794]: E0216 17:01:34.227747 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:34.727734103 +0000 UTC m=+120.675828750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:34 crc kubenswrapper[4794]: W0216 17:01:34.249534 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08547bee_d06e_467b_8be7_db65e24c7e49.slice/crio-7e1ad1cf5fa9ba1101ed45b84fe60d6a6bd246d82311916a3dbf996488d48a13 WatchSource:0}: Error finding container 7e1ad1cf5fa9ba1101ed45b84fe60d6a6bd246d82311916a3dbf996488d48a13: Status 404 returned error can't find the container with id 7e1ad1cf5fa9ba1101ed45b84fe60d6a6bd246d82311916a3dbf996488d48a13 Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.258593 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4vwx5"] Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.308634 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ghx"] Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.320583 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9btqh"] Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.329185 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:34 crc kubenswrapper[4794]: E0216 17:01:34.329767 4794 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:34.829672152 +0000 UTC m=+120.777766799 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.329814 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:34 crc kubenswrapper[4794]: E0216 17:01:34.330962 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:34.830950816 +0000 UTC m=+120.779045463 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.385159 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-znzx2"] Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.404919 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-85b84"] Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.412245 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-swnwm"] Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.426795 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mmhdw" podStartSLOduration=100.426773073 podStartE2EDuration="1m40.426773073s" podCreationTimestamp="2026-02-16 16:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:34.384087598 +0000 UTC m=+120.332182245" watchObservedRunningTime="2026-02-16 17:01:34.426773073 +0000 UTC m=+120.374867720" Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.431825 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:34 crc kubenswrapper[4794]: E0216 17:01:34.432161 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:34.932134215 +0000 UTC m=+120.880228862 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.451190 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5dxq9"] Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.456352 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-fg5gp" podStartSLOduration=99.456325328 podStartE2EDuration="1m39.456325328s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:34.405911988 +0000 UTC m=+120.354006655" watchObservedRunningTime="2026-02-16 17:01:34.456325328 +0000 UTC m=+120.404419975" Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.478928 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6z49s"] Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.481844 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-qfr5h"] Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.483735 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g2zhh"] Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.485695 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nsztn" podStartSLOduration=99.485675138 podStartE2EDuration="1m39.485675138s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:34.446914858 +0000 UTC m=+120.395009505" watchObservedRunningTime="2026-02-16 17:01:34.485675138 +0000 UTC m=+120.433769785" Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.516867 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp"] Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.524328 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hr6kg"] Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.524393 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lzvbh"] Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.537507 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:34 crc 
kubenswrapper[4794]: E0216 17:01:34.541371 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:35.037960978 +0000 UTC m=+120.986055625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.562748 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl" podStartSLOduration=99.562729346 podStartE2EDuration="1m39.562729346s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:34.540800753 +0000 UTC m=+120.488895400" watchObservedRunningTime="2026-02-16 17:01:34.562729346 +0000 UTC m=+120.510823993" Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.564523 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vwnxb"] Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.579649 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-16 16:56:33 +0000 UTC, rotation deadline is 2026-11-30 21:24:19.268287901 +0000 UTC Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.579744 4794 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6892h22m44.688547213s for next certificate rotation 
Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.583325 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-b77qj" podStartSLOduration=99.583264552 podStartE2EDuration="1m39.583264552s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:34.579390139 +0000 UTC m=+120.527484786" watchObservedRunningTime="2026-02-16 17:01:34.583264552 +0000 UTC m=+120.531359199" Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.620272 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-zwsbc" podStartSLOduration=99.620248285 podStartE2EDuration="1m39.620248285s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:34.609766327 +0000 UTC m=+120.557860974" watchObservedRunningTime="2026-02-16 17:01:34.620248285 +0000 UTC m=+120.568342932" Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.643514 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:34 crc kubenswrapper[4794]: E0216 17:01:34.644337 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:35.144318335 +0000 UTC m=+121.092412982 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:34 crc kubenswrapper[4794]: W0216 17:01:34.653439 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d72b4da_984e_4798_bfaa_d7a9c4b1c587.slice/crio-9825e3410977e3381d9efcada9d2f12d730fa9b35c855f4694a95b2e5397784e WatchSource:0}: Error finding container 9825e3410977e3381d9efcada9d2f12d730fa9b35c855f4694a95b2e5397784e: Status 404 returned error can't find the container with id 9825e3410977e3381d9efcada9d2f12d730fa9b35c855f4694a95b2e5397784e Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.759269 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:34 crc kubenswrapper[4794]: E0216 17:01:34.760446 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:35.260428001 +0000 UTC m=+121.208522638 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.789769 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-lgmrt" event={"ID":"ec4460e3-d99e-4ef7-9768-1d033a3e2538","Type":"ContainerStarted","Data":"7fbea3009fb833bdbe44addf0160d787f94102e0da6d8798ed519b84f7ebe85e"} Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.790394 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-lgmrt" Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.791375 4794 patch_prober.go:28] interesting pod/console-operator-58897d9998-lgmrt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.791437 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-lgmrt" podUID="ec4460e3-d99e-4ef7-9768-1d033a3e2538" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.834516 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-lgmrt" podStartSLOduration=99.83449721 
podStartE2EDuration="1m39.83449721s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:34.833160474 +0000 UTC m=+120.781255121" watchObservedRunningTime="2026-02-16 17:01:34.83449721 +0000 UTC m=+120.782591857" Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.848729 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-85b84" event={"ID":"9c029145-bf5d-4a8c-9419-fdcf93c96a4d","Type":"ContainerStarted","Data":"0c85789584138b14fa1f1c4029ec1f6fff79042b1fdd1262df8c4445cb5ae128"} Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.848763 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5rq7" event={"ID":"48aebd18-410a-4f26-8405-e618d55f7881","Type":"ContainerStarted","Data":"97705f0a9d1eee01421d82ca57d11cf7bb5beed7d0a39bdfebfa07e5afbeb46d"} Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.848773 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5rq7" event={"ID":"48aebd18-410a-4f26-8405-e618d55f7881","Type":"ContainerStarted","Data":"cf3c8eeba06b6d41c2c9def284a233b0bb9c18ea7a604d3c663871dac6a86eab"} Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.848782 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ghx" event={"ID":"e751341c-bffc-4204-b03c-5352f25323a0","Type":"ContainerStarted","Data":"d61cc85f1d3180a5c62938daad170809d390d12962cc0b74673212a7a4a412f7"} Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.848791 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4vwx5" 
event={"ID":"77e8b691-0679-4dd2-996e-10ee488c5594","Type":"ContainerStarted","Data":"254c512e2dfd2970134fb5d165545b39606ecca67db7cb26f795b81ca678c8a2"} Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.852327 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s8r99" event={"ID":"08547bee-d06e-467b-8be7-db65e24c7e49","Type":"ContainerStarted","Data":"f769dbc22fa38bb64a45765e45add515925d422c2538707b35690cbe86e81d64"} Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.852351 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s8r99" event={"ID":"08547bee-d06e-467b-8be7-db65e24c7e49","Type":"ContainerStarted","Data":"7e1ad1cf5fa9ba1101ed45b84fe60d6a6bd246d82311916a3dbf996488d48a13"} Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.861892 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:34 crc kubenswrapper[4794]: E0216 17:01:34.862965 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:35.362946856 +0000 UTC m=+121.311041503 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.888525 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hmkvr" event={"ID":"7559b705-987c-4020-afac-604cf0e58bbf","Type":"ContainerStarted","Data":"3890bde8d3671f23480898b5e4d72778d5dd57b8f2dabd8e4c8b2da13abf7a05"} Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.897793 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g2zhh" event={"ID":"1c46f3c7-53f2-456a-80d2-0007d79b7980","Type":"ContainerStarted","Data":"1c5310304073d27938d9c117db241ee661501cd6ee0218bbfb728160b8090f0b"} Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.900130 4794 patch_prober.go:28] interesting pod/router-default-5444994796-xtklb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:01:34 crc kubenswrapper[4794]: [-]has-synced failed: reason withheld Feb 16 17:01:34 crc kubenswrapper[4794]: [+]process-running ok Feb 16 17:01:34 crc kubenswrapper[4794]: healthz check failed Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.900195 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xtklb" podUID="33ee8fad-d568-45d8-b55f-3302e5f3c9c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:01:34 crc 
kubenswrapper[4794]: I0216 17:01:34.901207 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h26hh" event={"ID":"ced56472-de39-44c3-af64-45e0c6dbe0c6","Type":"ContainerStarted","Data":"f9123afed92844b835190cf3f5d9110ac78181d72a2fc8cfcc0d9daba1816435"} Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.910293 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5dxq9" event={"ID":"39a20b8b-461a-4584-9555-03b93bc951d6","Type":"ContainerStarted","Data":"6fa2d3710eba8c527cc2f9dc8b23d7aba7002b26b1d40a4926ae0d63a4fa32cd"} Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.927593 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-znzx2" event={"ID":"cb3afbce-0480-4641-95db-17a3c9c28d2d","Type":"ContainerStarted","Data":"11f5da277a25f61e822afa0619030d4ba55b033d9896cf933f0591c6eaa42b7d"} Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.931492 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swnwm" event={"ID":"b2c5bb52-5539-44fa-ae62-89450f1a97f2","Type":"ContainerStarted","Data":"024e219bd8a73577f4341d1f3ffba32028118a48d676ec829a9243d9b4b7a979"} Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.948957 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2rjhr" event={"ID":"c033bce4-9921-49ec-bda6-ba7f79647c00","Type":"ContainerStarted","Data":"45f626078816b5116841bc02cfb8caed5c5713e6a80559cfab01c9ce7942aef5"} Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.964059 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:34 crc kubenswrapper[4794]: E0216 17:01:34.965251 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:35.465237904 +0000 UTC m=+121.413332551 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:34 crc kubenswrapper[4794]: I0216 17:01:34.983378 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" event={"ID":"2b7e6568-6f15-4a8f-aca6-38be84a1a624","Type":"ContainerStarted","Data":"8f4bb954ca2e086af5e8e513e1547c5bf64b67356c4c1467fe4366c4032b7a74"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.003714 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vwnxb" event={"ID":"8d72b4da-984e-4798-bfaa-d7a9c4b1c587","Type":"ContainerStarted","Data":"9825e3410977e3381d9efcada9d2f12d730fa9b35c855f4694a95b2e5397784e"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.007552 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mkhh2" event={"ID":"28dcafba-7fbd-4ee4-aac0-431d46f0a438","Type":"ContainerStarted","Data":"8999ee3d15df726fede6f5d893a71b6bd971b6b6df4f500862ecfe631b3fb4a9"} Feb 16 17:01:35 crc 
kubenswrapper[4794]: I0216 17:01:35.008295 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mkhh2" Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.020227 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f" event={"ID":"112f572c-0d1d-4bb9-a66c-202a42a9aba1","Type":"ContainerStarted","Data":"241c561142a957e2b3b2411ac5e3ddb0238fea0a2d794321a85ef39832f6065e"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.041271 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dp7xf" event={"ID":"a58008d3-84e1-425d-a7de-bc37a0f2664e","Type":"ContainerStarted","Data":"312ec82d39409b953afe6bdca0408aa326706b65d20d210beee0b90493f4b599"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.054695 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jv2jb" event={"ID":"ef8bb78b-0644-4319-8928-4ba08d325777","Type":"ContainerStarted","Data":"42b5a4feb33a7108416b56f2e1e55112d586853c9f769628d610f827f3306047"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.054742 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jv2jb" event={"ID":"ef8bb78b-0644-4319-8928-4ba08d325777","Type":"ContainerStarted","Data":"0b28abd9e19379cba4aabdd3439a5fcd978143ae51e30b075fd607e7d08a5a83"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.064872 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:35 crc kubenswrapper[4794]: E0216 
17:01:35.065683 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:35.565665894 +0000 UTC m=+121.513760541 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.105025 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsk9b" event={"ID":"3aee29ce-f5ae-42d5-9c0c-7648739c6c49","Type":"ContainerStarted","Data":"0e6446e5bb361a5b3214a54c60556869b207f363dd01c4ba0826f21ad121b7ff"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.136891 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vhvsb" event={"ID":"cfdff039-c0a8-4244-9f4e-7aeb01507348","Type":"ContainerStarted","Data":"4ac7e1fe51919dc9865b56de9772703b9cdaf55bb10be6f2c72804e1aa4f274c"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.167468 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6z49s" event={"ID":"6f58f777-b916-4180-9e54-f138e10b2297","Type":"ContainerStarted","Data":"6b5ef59c8c2ab1226e8f68c112dc3759408b362ab01b708b11794deea4970b55"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.167874 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:35 crc kubenswrapper[4794]: E0216 17:01:35.168154 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:35.668140167 +0000 UTC m=+121.616234814 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.182630 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql" event={"ID":"1a6aade7-1b78-4753-a22d-7251a1b27c9e","Type":"ContainerStarted","Data":"f29f00ae2ffdde34bb1c46c703b87e7473eed209f751f1a3a774a72120fde604"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.182671 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql" event={"ID":"1a6aade7-1b78-4753-a22d-7251a1b27c9e","Type":"ContainerStarted","Data":"660ca2126ccd857b7dbceb03e3fd87a4dd1d93029487e6c8512be6fcf1962de8"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.269981 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:35 crc kubenswrapper[4794]: E0216 17:01:35.270979 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:35.77095848 +0000 UTC m=+121.719053127 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.279049 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-b6clj" event={"ID":"33b7f2f3-0621-4fb2-b8f4-d89dd18bf7f0","Type":"ContainerStarted","Data":"620947fefe17948ee3a8d1b036f2fbf40dfd0f58dd5216f4303d5e170e18ae81"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.279102 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-b6clj" event={"ID":"33b7f2f3-0621-4fb2-b8f4-d89dd18bf7f0","Type":"ContainerStarted","Data":"0760544d4b383072783a4dc5cbb62de651415e360bf67f44e3033080e0ec2b95"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.356725 4794 generic.go:334] "Generic (PLEG): container finished" podID="9d2009dc-5385-4529-b1b3-d14a75a50089" 
containerID="590ba1d3e2662a1856e47ba85dace8b0491a8d723fb197c11d97434a395ae0d3" exitCode=0 Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.357587 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5fbkt" event={"ID":"9d2009dc-5385-4529-b1b3-d14a75a50089","Type":"ContainerDied","Data":"590ba1d3e2662a1856e47ba85dace8b0491a8d723fb197c11d97434a395ae0d3"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.371837 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:35 crc kubenswrapper[4794]: E0216 17:01:35.373451 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:35.873434404 +0000 UTC m=+121.821529051 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.384661 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6x29q" event={"ID":"54320564-7237-45c8-b465-82f3546faf41","Type":"ContainerStarted","Data":"a4500a1b9ac4dfdf8ee23519fb520c1b47821cc04479bfdc4cd9095b2c986ac1"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.397689 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5czgb" event={"ID":"ba10edb3-730f-4c8b-8380-54162faf0ba8","Type":"ContainerStarted","Data":"36c8def4f4f8d2280d9c7ae327854d7334fdeb9aeaa40e4debbdacfb0b8181d6"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.397723 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5czgb" event={"ID":"ba10edb3-730f-4c8b-8380-54162faf0ba8","Type":"ContainerStarted","Data":"ac0ab9c2485d745a1b5c5507d8d5e49bd26a142f794ff72b91572c6ee77abc7e"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.398526 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5czgb" Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.399341 4794 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5czgb container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: 
connect: connection refused" start-of-body= Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.399379 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5czgb" podUID="ba10edb3-730f-4c8b-8380-54162faf0ba8" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.427978 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h" event={"ID":"9bbc4953-c59b-46cd-8f17-513136731d2a","Type":"ContainerStarted","Data":"6bd8ff356bba17dc0e6c0a91b7cca6cd9cf9badc609860cbc5a512a1071340d9"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.468853 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hr6kg" event={"ID":"f63e5c5e-d547-4849-afe6-932beaf632a5","Type":"ContainerStarted","Data":"6df799f063104ae0cba02b75bc8205fcf53b0d58754f526cc18a53c41e7b61f1"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.472969 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:35 crc kubenswrapper[4794]: E0216 17:01:35.474624 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:35.974601573 +0000 UTC m=+121.922696250 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.489529 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9btqh" event={"ID":"abf42440-612e-4e7f-95ee-5a4860c9bc59","Type":"ContainerStarted","Data":"fc4696299bc2a8ac14dee5a0a3f15be24d972c2b72702da10dce3e56d2af265c"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.518898 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp" event={"ID":"988b4b22-136d-441f-a51f-8209b7181c08","Type":"ContainerStarted","Data":"50db84a49431206afa4fe8c351f636e28e652d2c2064ed718fb4507f47fd704c"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.519341 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp" Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.520176 4794 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7ngdp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.520216 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp" podUID="988b4b22-136d-441f-a51f-8209b7181c08" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.521855 4794 patch_prober.go:28] interesting pod/downloads-7954f5f757-fg5gp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.521891 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fg5gp" podUID="5cff129b-bd54-4115-bc42-d5617d10eae0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.522421 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lzvbh" event={"ID":"3a53e5ad-d4b9-4d98-b3c1-b3e59abf44e3","Type":"ContainerStarted","Data":"c347cab9e03a6e0bf36d143dab6a12bce27aa1565718ba70a244689727b5e3cc"} Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.555641 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl" Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.569230 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-dp7xf" podStartSLOduration=100.569215267 podStartE2EDuration="1m40.569215267s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:35.564912183 +0000 UTC m=+121.513006840" watchObservedRunningTime="2026-02-16 17:01:35.569215267 +0000 UTC m=+121.517309914" Feb 16 17:01:35 crc 
kubenswrapper[4794]: I0216 17:01:35.575701 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:35 crc kubenswrapper[4794]: E0216 17:01:35.577581 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:36.077567069 +0000 UTC m=+122.025661776 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.613105 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5czgb" podStartSLOduration=99.613081963 podStartE2EDuration="1m39.613081963s" podCreationTimestamp="2026-02-16 16:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:35.585204902 +0000 UTC m=+121.533299549" watchObservedRunningTime="2026-02-16 17:01:35.613081963 +0000 UTC m=+121.561176620" Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.646807 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s8r99" podStartSLOduration=99.643290126 podStartE2EDuration="1m39.643290126s" podCreationTimestamp="2026-02-16 16:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:35.624409914 +0000 UTC m=+121.572504561" watchObservedRunningTime="2026-02-16 17:01:35.643290126 +0000 UTC m=+121.591384773" Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.677979 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:35 crc kubenswrapper[4794]: E0216 17:01:35.678455 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:36.17843569 +0000 UTC m=+122.126530337 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.680132 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql" podStartSLOduration=95.680119535 podStartE2EDuration="1m35.680119535s" podCreationTimestamp="2026-02-16 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:35.65359513 +0000 UTC m=+121.601689777" watchObservedRunningTime="2026-02-16 17:01:35.680119535 +0000 UTC m=+121.628214182" Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.680361 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vhvsb" podStartSLOduration=100.680355291 podStartE2EDuration="1m40.680355291s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:35.674161677 +0000 UTC m=+121.622256324" watchObservedRunningTime="2026-02-16 17:01:35.680355291 +0000 UTC m=+121.628449938" Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.712867 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5rq7" podStartSLOduration=101.712852475 podStartE2EDuration="1m41.712852475s" podCreationTimestamp="2026-02-16 16:59:54 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:35.71266851 +0000 UTC m=+121.660763157" watchObservedRunningTime="2026-02-16 17:01:35.712852475 +0000 UTC m=+121.660947122" Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.756696 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h26hh" podStartSLOduration=100.75667953 podStartE2EDuration="1m40.75667953s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:35.756454454 +0000 UTC m=+121.704549111" watchObservedRunningTime="2026-02-16 17:01:35.75667953 +0000 UTC m=+121.704774177" Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.779946 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:35 crc kubenswrapper[4794]: E0216 17:01:35.780275 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:36.280263107 +0000 UTC m=+122.228357754 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.798389 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h" podStartSLOduration=99.798360028 podStartE2EDuration="1m39.798360028s" podCreationTimestamp="2026-02-16 16:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:35.792014109 +0000 UTC m=+121.740108756" watchObservedRunningTime="2026-02-16 17:01:35.798360028 +0000 UTC m=+121.746454675" Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.879558 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsk9b" podStartSLOduration=100.879543595 podStartE2EDuration="1m40.879543595s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:35.838817913 +0000 UTC m=+121.786912560" watchObservedRunningTime="2026-02-16 17:01:35.879543595 +0000 UTC m=+121.827638242" Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.882159 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-b6clj" podStartSLOduration=99.882147545 podStartE2EDuration="1m39.882147545s" podCreationTimestamp="2026-02-16 16:59:56 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:35.878383185 +0000 UTC m=+121.826477842" watchObservedRunningTime="2026-02-16 17:01:35.882147545 +0000 UTC m=+121.830242192" Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.883213 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:35 crc kubenswrapper[4794]: E0216 17:01:35.884020 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:36.384007724 +0000 UTC m=+122.332102371 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.899017 4794 patch_prober.go:28] interesting pod/router-default-5444994796-xtklb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:01:35 crc kubenswrapper[4794]: [-]has-synced failed: reason withheld Feb 16 17:01:35 crc kubenswrapper[4794]: [+]process-running ok Feb 16 17:01:35 crc kubenswrapper[4794]: healthz check failed Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.899064 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xtklb" podUID="33ee8fad-d568-45d8-b55f-3302e5f3c9c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.919177 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lzvbh" podStartSLOduration=99.919159228 podStartE2EDuration="1m39.919159228s" podCreationTimestamp="2026-02-16 16:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:35.918812299 +0000 UTC m=+121.866906946" watchObservedRunningTime="2026-02-16 17:01:35.919159228 +0000 UTC m=+121.867253875" Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.965153 4794 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp" podStartSLOduration=99.96513695 podStartE2EDuration="1m39.96513695s" podCreationTimestamp="2026-02-16 16:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:35.961334739 +0000 UTC m=+121.909429386" watchObservedRunningTime="2026-02-16 17:01:35.96513695 +0000 UTC m=+121.913231597" Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.991834 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-hmkvr" podStartSLOduration=6.99181948 podStartE2EDuration="6.99181948s" podCreationTimestamp="2026-02-16 17:01:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:35.991136741 +0000 UTC m=+121.939231388" watchObservedRunningTime="2026-02-16 17:01:35.99181948 +0000 UTC m=+121.939914127" Feb 16 17:01:35 crc kubenswrapper[4794]: I0216 17:01:35.996030 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:35 crc kubenswrapper[4794]: E0216 17:01:35.996530 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:36.496515244 +0000 UTC m=+122.444609891 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.054505 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-rr75f" podStartSLOduration=101.054482385 podStartE2EDuration="1m41.054482385s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:36.052720898 +0000 UTC m=+122.000815545" watchObservedRunningTime="2026-02-16 17:01:36.054482385 +0000 UTC m=+122.002577042" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.096659 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:36 crc kubenswrapper[4794]: E0216 17:01:36.096967 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:36.596952924 +0000 UTC m=+122.545047561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.131474 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mkhh2" podStartSLOduration=101.131456841 podStartE2EDuration="1m41.131456841s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:36.086052374 +0000 UTC m=+122.034147021" watchObservedRunningTime="2026-02-16 17:01:36.131456841 +0000 UTC m=+122.079551488" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.198278 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:36 crc kubenswrapper[4794]: E0216 17:01:36.201901 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:36.701874483 +0000 UTC m=+122.649969130 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.205126 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-2rjhr" podStartSLOduration=100.205109589 podStartE2EDuration="1m40.205109589s" podCreationTimestamp="2026-02-16 16:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:36.158687645 +0000 UTC m=+122.106782292" watchObservedRunningTime="2026-02-16 17:01:36.205109589 +0000 UTC m=+122.153204236" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.306160 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:36 crc kubenswrapper[4794]: E0216 17:01:36.306593 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:36.806567025 +0000 UTC m=+122.754661672 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.306633 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:36 crc kubenswrapper[4794]: E0216 17:01:36.306971 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:36.806958676 +0000 UTC m=+122.755053323 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.408136 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:36 crc kubenswrapper[4794]: E0216 17:01:36.408308 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:36.908274138 +0000 UTC m=+122.856368785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.408467 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:36 crc kubenswrapper[4794]: E0216 17:01:36.408789 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:36.908777532 +0000 UTC m=+122.856872179 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.509650 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:36 crc kubenswrapper[4794]: E0216 17:01:36.510114 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:37.010096195 +0000 UTC m=+122.958190842 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.529157 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5fbkt" event={"ID":"9d2009dc-5385-4529-b1b3-d14a75a50089","Type":"ContainerStarted","Data":"e483fc48c96f1992de2e193f994e2dcd6603810880c0d6ea94d83bf575491d03"} Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.530381 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp" event={"ID":"988b4b22-136d-441f-a51f-8209b7181c08","Type":"ContainerStarted","Data":"9dbfa9cedeaa4b5c3da88dc59391d3602f9a6afdf71a6001eafa6bd2522dd991"} Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.531619 4794 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7ngdp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.531678 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp" podUID="988b4b22-136d-441f-a51f-8209b7181c08" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.532716 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-znzx2" event={"ID":"cb3afbce-0480-4641-95db-17a3c9c28d2d","Type":"ContainerStarted","Data":"2e6515b3da65c35a118de86083a63609a4e006b910c32f8bbf58a0890d037017"} Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.533748 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ghx" event={"ID":"e751341c-bffc-4204-b03c-5352f25323a0","Type":"ContainerStarted","Data":"d8140e8a78f0759da3b31b7eab987b15cc4a20b8fa3a0614dc683bcb80e3ee22"} Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.544332 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jv2jb" event={"ID":"ef8bb78b-0644-4319-8928-4ba08d325777","Type":"ContainerStarted","Data":"8a73ccf4bd921e8158e40fab8e87c8132e6f40d68139264c45de9204ca68beed"} Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.544869 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jv2jb" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.549771 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9btqh" event={"ID":"abf42440-612e-4e7f-95ee-5a4860c9bc59","Type":"ContainerStarted","Data":"beb33c383bdb4be96b13bc1ed6565a795a5f74e7ff657f0a2ec379046e0c2c70"} Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.549811 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9btqh" event={"ID":"abf42440-612e-4e7f-95ee-5a4860c9bc59","Type":"ContainerStarted","Data":"f426660ee64c72ed8e440ba6a260144e2c8355672db6e159c2981a1724a179cf"} Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.550583 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-9btqh" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.552016 4794 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6z49s" event={"ID":"6f58f777-b916-4180-9e54-f138e10b2297","Type":"ContainerStarted","Data":"da7bb38c4cfe4595286b5700966b3be213a7190f67cf47383612ca0fb08894d0"} Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.552057 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6z49s" event={"ID":"6f58f777-b916-4180-9e54-f138e10b2297","Type":"ContainerStarted","Data":"af85826c02b5a26285511869897a39b622d3d3ccb81cc63785fa402f321f74c4"} Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.553062 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5dxq9" event={"ID":"39a20b8b-461a-4584-9555-03b93bc951d6","Type":"ContainerStarted","Data":"2e562f40b3e06b93afef33a71091a0a68735e0468d262374480f1e0e76c950f4"} Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.554376 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vwnxb" event={"ID":"8d72b4da-984e-4798-bfaa-d7a9c4b1c587","Type":"ContainerStarted","Data":"ef451c561b7250c63bf3f802bee36fbefac18cbe3bcbbd95b29baf10f53cdf80"} Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.558023 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4vwx5" event={"ID":"77e8b691-0679-4dd2-996e-10ee488c5594","Type":"ContainerStarted","Data":"9c7e897a0a801be6f3edb7d0496fc7c0a57db66ee0f8088d9431c3820222f622"} Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.570476 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g2zhh" event={"ID":"1c46f3c7-53f2-456a-80d2-0007d79b7980","Type":"ContainerStarted","Data":"735d1d9d43a1b6a433b3b7603ce930cecbe77560cbadb3d71bd43647ae878d8b"} Feb 16 17:01:36 crc kubenswrapper[4794]: 
I0216 17:01:36.570934 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g2zhh" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.574722 4794 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-g2zhh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.574988 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g2zhh" podUID="1c46f3c7-53f2-456a-80d2-0007d79b7980" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.587061 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6x29q" event={"ID":"54320564-7237-45c8-b465-82f3546faf41","Type":"ContainerStarted","Data":"f29251046ebbdc085098d1dba317691440c6acb80d45ef75441a782337be5136"} Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.612265 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.612968 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lzvbh" 
event={"ID":"3a53e5ad-d4b9-4d98-b3c1-b3e59abf44e3","Type":"ContainerStarted","Data":"c8573ddfddc137c308eb9136ad4a3f8c995ed8f9312331d2a80547d567b78b9b"} Feb 16 17:01:36 crc kubenswrapper[4794]: E0216 17:01:36.615061 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:37.115046084 +0000 UTC m=+123.063140731 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.615127 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6ghx" podStartSLOduration=101.615112386 podStartE2EDuration="1m41.615112386s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:36.604871394 +0000 UTC m=+122.552966041" watchObservedRunningTime="2026-02-16 17:01:36.615112386 +0000 UTC m=+122.563207033" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.639647 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hr6kg" event={"ID":"f63e5c5e-d547-4849-afe6-932beaf632a5","Type":"ContainerStarted","Data":"c1a3bd9662dac4a5ad5250a5231523e01b3f35252d86d5cf282badd481c0368a"} Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 
17:01:36.639697 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hr6kg" event={"ID":"f63e5c5e-d547-4849-afe6-932beaf632a5","Type":"ContainerStarted","Data":"806ca01b21ce8c28878b1c79a02c3039766f741a5be7322f396b2d8cbc19be8f"} Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.670544 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" event={"ID":"2b7e6568-6f15-4a8f-aca6-38be84a1a624","Type":"ContainerStarted","Data":"b71ade9bb24bbfcad1d6e843429935de8b6387450f68313e7ed1f54116cc34e9"} Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.671580 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.682239 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5dxq9" podStartSLOduration=101.68222533 podStartE2EDuration="1m41.68222533s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:36.681609623 +0000 UTC m=+122.629704270" watchObservedRunningTime="2026-02-16 17:01:36.68222533 +0000 UTC m=+122.630319977" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.684000 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swnwm" event={"ID":"b2c5bb52-5539-44fa-ae62-89450f1a97f2","Type":"ContainerStarted","Data":"d3366336730f10aa14bb075505bad7ffc71f0a64f62191b89c1566172a7f3e61"} Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.684049 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swnwm" 
event={"ID":"b2c5bb52-5539-44fa-ae62-89450f1a97f2","Type":"ContainerStarted","Data":"b3be73f8f58d7757f08c5eeaefb42db33dbf25e02b961b45e19efec81a9167d7"} Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.691027 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-85b84" event={"ID":"9c029145-bf5d-4a8c-9419-fdcf93c96a4d","Type":"ContainerStarted","Data":"f9cf8e3246408184e6b3aa25436ea6945ac6e95059e56bb5f8c5bec5791fe540"} Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.691066 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-85b84" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.697109 4794 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-qfr5h container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" start-of-body= Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.697506 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" podUID="2b7e6568-6f15-4a8f-aca6-38be84a1a624" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.701451 4794 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-85b84 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.701650 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-85b84" 
podUID="9c029145-bf5d-4a8c-9419-fdcf93c96a4d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.712466 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.713363 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.716833 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:36 crc kubenswrapper[4794]: E0216 17:01:36.718265 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:37.218243547 +0000 UTC m=+123.166338204 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.741649 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5czgb" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.770876 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6z49s" podStartSLOduration=100.770858965 podStartE2EDuration="1m40.770858965s" podCreationTimestamp="2026-02-16 16:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:36.770438854 +0000 UTC m=+122.718533521" watchObservedRunningTime="2026-02-16 17:01:36.770858965 +0000 UTC m=+122.718953612" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.801236 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-9btqh" podStartSLOduration=7.801213332 podStartE2EDuration="7.801213332s" podCreationTimestamp="2026-02-16 17:01:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:36.797796191 +0000 UTC m=+122.745890838" watchObservedRunningTime="2026-02-16 17:01:36.801213332 +0000 UTC m=+122.749307979" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.824019 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:36 crc kubenswrapper[4794]: E0216 17:01:36.825872 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:37.325849857 +0000 UTC m=+123.273944504 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.832151 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4vwx5" podStartSLOduration=7.832126474 podStartE2EDuration="7.832126474s" podCreationTimestamp="2026-02-16 17:01:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:36.826487394 +0000 UTC m=+122.774582041" watchObservedRunningTime="2026-02-16 17:01:36.832126474 +0000 UTC m=+122.780221131" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.859144 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jv2jb" podStartSLOduration=100.859118071 podStartE2EDuration="1m40.859118071s" 
podCreationTimestamp="2026-02-16 16:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:36.856110061 +0000 UTC m=+122.804204708" watchObservedRunningTime="2026-02-16 17:01:36.859118071 +0000 UTC m=+122.807212718" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.900807 4794 patch_prober.go:28] interesting pod/router-default-5444994796-xtklb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:01:36 crc kubenswrapper[4794]: [-]has-synced failed: reason withheld Feb 16 17:01:36 crc kubenswrapper[4794]: [+]process-running ok Feb 16 17:01:36 crc kubenswrapper[4794]: healthz check failed Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.902490 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xtklb" podUID="33ee8fad-d568-45d8-b55f-3302e5f3c9c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.909981 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g2zhh" podStartSLOduration=100.909956383 podStartE2EDuration="1m40.909956383s" podCreationTimestamp="2026-02-16 16:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:36.907534458 +0000 UTC m=+122.855629105" watchObservedRunningTime="2026-02-16 17:01:36.909956383 +0000 UTC m=+122.858051030" Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.925471 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:36 crc kubenswrapper[4794]: E0216 17:01:36.926273 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:37.426250106 +0000 UTC m=+123.374344753 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:36 crc kubenswrapper[4794]: I0216 17:01:36.929747 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-vwnxb" podStartSLOduration=100.929714618 podStartE2EDuration="1m40.929714618s" podCreationTimestamp="2026-02-16 16:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:36.926674827 +0000 UTC m=+122.874769474" watchObservedRunningTime="2026-02-16 17:01:36.929714618 +0000 UTC m=+122.877809265" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:36.999076 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-6x29q" podStartSLOduration=100.99904528 podStartE2EDuration="1m40.99904528s" podCreationTimestamp="2026-02-16 16:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:36.968850758 +0000 UTC m=+122.916945415" watchObservedRunningTime="2026-02-16 17:01:36.99904528 +0000 UTC m=+122.947139927" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.018406 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.021232 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-swnwm" podStartSLOduration=101.02121712 podStartE2EDuration="1m41.02121712s" podCreationTimestamp="2026-02-16 16:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:37.017424319 +0000 UTC m=+122.965518966" watchObservedRunningTime="2026-02-16 17:01:37.02121712 +0000 UTC m=+122.969311767" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.030200 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:37 crc kubenswrapper[4794]: E0216 17:01:37.030627 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:37.530609329 +0000 UTC m=+123.478703986 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.078977 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-85b84" podStartSLOduration=101.078959654 podStartE2EDuration="1m41.078959654s" podCreationTimestamp="2026-02-16 16:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:37.03966563 +0000 UTC m=+122.987760277" watchObservedRunningTime="2026-02-16 17:01:37.078959654 +0000 UTC m=+123.027054301" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.080276 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hr6kg" podStartSLOduration=101.080269909 podStartE2EDuration="1m41.080269909s" podCreationTimestamp="2026-02-16 16:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:37.077915447 +0000 UTC m=+123.026010104" watchObservedRunningTime="2026-02-16 17:01:37.080269909 +0000 UTC m=+123.028364556" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.131445 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" podStartSLOduration=102.131430189 podStartE2EDuration="1m42.131430189s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:37.114796817 +0000 UTC m=+123.062891484" watchObservedRunningTime="2026-02-16 17:01:37.131430189 +0000 UTC m=+123.079524836" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.133834 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:37 crc kubenswrapper[4794]: E0216 17:01:37.134238 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:37.634226433 +0000 UTC m=+123.582321080 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.235088 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:37 crc kubenswrapper[4794]: E0216 17:01:37.235594 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:37.735576567 +0000 UTC m=+123.683671214 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.336553 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:37 crc kubenswrapper[4794]: E0216 17:01:37.336903 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:37.836693495 +0000 UTC m=+123.784788152 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.337077 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:37 crc kubenswrapper[4794]: E0216 17:01:37.337390 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:37.837379923 +0000 UTC m=+123.785474570 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.438075 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:37 crc kubenswrapper[4794]: E0216 17:01:37.438233 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:37.938203473 +0000 UTC m=+123.886298120 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.438663 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:37 crc kubenswrapper[4794]: E0216 17:01:37.438953 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:37.938941422 +0000 UTC m=+123.887036069 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.532113 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-lgmrt" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.540164 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:37 crc kubenswrapper[4794]: E0216 17:01:37.540493 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:38.04046249 +0000 UTC m=+123.988557137 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.540658 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:37 crc kubenswrapper[4794]: E0216 17:01:37.541035 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:38.041023105 +0000 UTC m=+123.989117752 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.575001 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.575969 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.583017 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.588044 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.595966 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.642084 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:37 crc kubenswrapper[4794]: E0216 17:01:37.642432 4794 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:38.14241255 +0000 UTC m=+124.090507197 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.729942 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5fbkt" event={"ID":"9d2009dc-5385-4529-b1b3-d14a75a50089","Type":"ContainerStarted","Data":"5574ebf018df7c271dbd5aaa969aa04643b574874a3e8e803df6b609db0d6829"} Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.736995 4794 generic.go:334] "Generic (PLEG): container finished" podID="1a6aade7-1b78-4753-a22d-7251a1b27c9e" containerID="f29f00ae2ffdde34bb1c46c703b87e7473eed209f751f1a3a774a72120fde604" exitCode=0 Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.738434 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql" event={"ID":"1a6aade7-1b78-4753-a22d-7251a1b27c9e","Type":"ContainerDied","Data":"f29f00ae2ffdde34bb1c46c703b87e7473eed209f751f1a3a774a72120fde604"} Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.741930 4794 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-85b84 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" 
start-of-body= Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.741968 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-85b84" podUID="9c029145-bf5d-4a8c-9419-fdcf93c96a4d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.743382 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae8b5f6d-af1b-4be6-a043-4e829a09c334-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ae8b5f6d-af1b-4be6-a043-4e829a09c334\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.743448 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.743534 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae8b5f6d-af1b-4be6-a043-4e829a09c334-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ae8b5f6d-af1b-4be6-a043-4e829a09c334\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 17:01:37 crc kubenswrapper[4794]: E0216 17:01:37.743858 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 17:01:38.243844856 +0000 UTC m=+124.191939513 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.747912 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ld54h" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.756660 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g2zhh" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.797676 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mkhh2" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.843207 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-5fbkt" podStartSLOduration=102.843189406 podStartE2EDuration="1m42.843189406s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:37.819162317 +0000 UTC m=+123.767256964" watchObservedRunningTime="2026-02-16 17:01:37.843189406 +0000 UTC m=+123.791284063" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.845862 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:37 crc kubenswrapper[4794]: E0216 17:01:37.846051 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:38.346023311 +0000 UTC m=+124.294117958 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.847710 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae8b5f6d-af1b-4be6-a043-4e829a09c334-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ae8b5f6d-af1b-4be6-a043-4e829a09c334\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.848225 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae8b5f6d-af1b-4be6-a043-4e829a09c334-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ae8b5f6d-af1b-4be6-a043-4e829a09c334\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.848635 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.849903 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae8b5f6d-af1b-4be6-a043-4e829a09c334-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ae8b5f6d-af1b-4be6-a043-4e829a09c334\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 17:01:37 crc kubenswrapper[4794]: E0216 17:01:37.851713 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:38.351698492 +0000 UTC m=+124.299793139 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.901374 4794 patch_prober.go:28] interesting pod/router-default-5444994796-xtklb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:01:37 crc kubenswrapper[4794]: [-]has-synced failed: reason withheld Feb 16 17:01:37 crc kubenswrapper[4794]: [+]process-running ok Feb 16 17:01:37 crc kubenswrapper[4794]: healthz check failed Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.901436 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xtklb" podUID="33ee8fad-d568-45d8-b55f-3302e5f3c9c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.907023 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae8b5f6d-af1b-4be6-a043-4e829a09c334-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ae8b5f6d-af1b-4be6-a043-4e829a09c334\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.949631 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:37 crc kubenswrapper[4794]: E0216 17:01:37.949781 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:38.449758828 +0000 UTC m=+124.397853475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:37 crc kubenswrapper[4794]: I0216 17:01:37.949910 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:37 crc kubenswrapper[4794]: E0216 17:01:37.950890 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:38.450877878 +0000 UTC m=+124.398972525 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.051017 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:38 crc kubenswrapper[4794]: E0216 17:01:38.051574 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:38.551554614 +0000 UTC m=+124.499649261 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.152412 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:38 crc kubenswrapper[4794]: E0216 17:01:38.152756 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:38.652745273 +0000 UTC m=+124.600839920 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.194382 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.205595 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7ngdp" Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.253357 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:38 crc kubenswrapper[4794]: E0216 17:01:38.253646 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:38.753615364 +0000 UTC m=+124.701710011 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.254039 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:38 crc kubenswrapper[4794]: E0216 17:01:38.254443 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:38.754434806 +0000 UTC m=+124.702529453 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.357017 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:38 crc kubenswrapper[4794]: E0216 17:01:38.357164 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:38.857139346 +0000 UTC m=+124.805233993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.357266 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:38 crc kubenswrapper[4794]: E0216 17:01:38.357564 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:38.857556407 +0000 UTC m=+124.805651054 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.458758 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:38 crc kubenswrapper[4794]: E0216 17:01:38.458889 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:38.95887061 +0000 UTC m=+124.906965257 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.459016 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:38 crc kubenswrapper[4794]: E0216 17:01:38.459376 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:38.959368873 +0000 UTC m=+124.907463510 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.559880 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:38 crc kubenswrapper[4794]: E0216 17:01:38.560010 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:39.059977927 +0000 UTC m=+125.008072574 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.560207 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:38 crc kubenswrapper[4794]: E0216 17:01:38.560626 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:39.060619054 +0000 UTC m=+125.008713701 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.661822 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:38 crc kubenswrapper[4794]: E0216 17:01:38.662135 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:39.162094221 +0000 UTC m=+125.110188868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.662541 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:38 crc kubenswrapper[4794]: E0216 17:01:38.662867 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:39.162852611 +0000 UTC m=+125.110947258 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.741217 4794 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-qfr5h container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.27:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.741262 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" podUID="2b7e6568-6f15-4a8f-aca6-38be84a1a624" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.27:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.757422 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7cctn"] Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.758489 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7cctn" Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.762248 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.763411 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:38 crc kubenswrapper[4794]: E0216 17:01:38.763514 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:39.263494506 +0000 UTC m=+125.211589153 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.763656 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:38 crc kubenswrapper[4794]: E0216 17:01:38.764091 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:39.264074601 +0000 UTC m=+125.212169238 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.770899 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-znzx2" event={"ID":"cb3afbce-0480-4641-95db-17a3c9c28d2d","Type":"ContainerStarted","Data":"84c2f17fd97d3e932adf932fb4f4b30a5ab9067077d48be70a1a968409ee50ca"} Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.780613 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7cctn"] Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.812742 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.864615 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.865368 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9ab6e7-980e-4a61-9072-cd2baa7c51ab-catalog-content\") pod \"community-operators-7cctn\" (UID: \"0f9ab6e7-980e-4a61-9072-cd2baa7c51ab\") " pod="openshift-marketplace/community-operators-7cctn" Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 
17:01:38.865509 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq6f6\" (UniqueName: \"kubernetes.io/projected/0f9ab6e7-980e-4a61-9072-cd2baa7c51ab-kube-api-access-lq6f6\") pod \"community-operators-7cctn\" (UID: \"0f9ab6e7-980e-4a61-9072-cd2baa7c51ab\") " pod="openshift-marketplace/community-operators-7cctn" Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.865548 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9ab6e7-980e-4a61-9072-cd2baa7c51ab-utilities\") pod \"community-operators-7cctn\" (UID: \"0f9ab6e7-980e-4a61-9072-cd2baa7c51ab\") " pod="openshift-marketplace/community-operators-7cctn" Feb 16 17:01:38 crc kubenswrapper[4794]: E0216 17:01:38.865639 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:39.365626841 +0000 UTC m=+125.313721488 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.900643 4794 patch_prober.go:28] interesting pod/router-default-5444994796-xtklb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:01:38 crc kubenswrapper[4794]: [-]has-synced failed: reason withheld Feb 16 17:01:38 crc kubenswrapper[4794]: [+]process-running ok Feb 16 17:01:38 crc kubenswrapper[4794]: healthz check failed Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.900695 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xtklb" podUID="33ee8fad-d568-45d8-b55f-3302e5f3c9c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.960738 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6v5np"] Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.961639 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6v5np" Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.966661 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.968128 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.968208 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9ab6e7-980e-4a61-9072-cd2baa7c51ab-catalog-content\") pod \"community-operators-7cctn\" (UID: \"0f9ab6e7-980e-4a61-9072-cd2baa7c51ab\") " pod="openshift-marketplace/community-operators-7cctn" Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.968254 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq6f6\" (UniqueName: \"kubernetes.io/projected/0f9ab6e7-980e-4a61-9072-cd2baa7c51ab-kube-api-access-lq6f6\") pod \"community-operators-7cctn\" (UID: \"0f9ab6e7-980e-4a61-9072-cd2baa7c51ab\") " pod="openshift-marketplace/community-operators-7cctn" Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.968276 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9ab6e7-980e-4a61-9072-cd2baa7c51ab-utilities\") pod \"community-operators-7cctn\" (UID: \"0f9ab6e7-980e-4a61-9072-cd2baa7c51ab\") " pod="openshift-marketplace/community-operators-7cctn" Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.968715 4794 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9ab6e7-980e-4a61-9072-cd2baa7c51ab-utilities\") pod \"community-operators-7cctn\" (UID: \"0f9ab6e7-980e-4a61-9072-cd2baa7c51ab\") " pod="openshift-marketplace/community-operators-7cctn" Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.968907 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9ab6e7-980e-4a61-9072-cd2baa7c51ab-catalog-content\") pod \"community-operators-7cctn\" (UID: \"0f9ab6e7-980e-4a61-9072-cd2baa7c51ab\") " pod="openshift-marketplace/community-operators-7cctn" Feb 16 17:01:38 crc kubenswrapper[4794]: E0216 17:01:38.969112 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:39.469099481 +0000 UTC m=+125.417194128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:38 crc kubenswrapper[4794]: I0216 17:01:38.994273 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6v5np"] Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.022221 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq6f6\" (UniqueName: \"kubernetes.io/projected/0f9ab6e7-980e-4a61-9072-cd2baa7c51ab-kube-api-access-lq6f6\") pod \"community-operators-7cctn\" (UID: \"0f9ab6e7-980e-4a61-9072-cd2baa7c51ab\") " pod="openshift-marketplace/community-operators-7cctn" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.068732 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.068999 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa756591-c2f4-430e-8f17-bd040051f77d-catalog-content\") pod \"certified-operators-6v5np\" (UID: \"aa756591-c2f4-430e-8f17-bd040051f77d\") " pod="openshift-marketplace/certified-operators-6v5np" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.069041 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-7n8rd\" (UniqueName: \"kubernetes.io/projected/aa756591-c2f4-430e-8f17-bd040051f77d-kube-api-access-7n8rd\") pod \"certified-operators-6v5np\" (UID: \"aa756591-c2f4-430e-8f17-bd040051f77d\") " pod="openshift-marketplace/certified-operators-6v5np" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.069106 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa756591-c2f4-430e-8f17-bd040051f77d-utilities\") pod \"certified-operators-6v5np\" (UID: \"aa756591-c2f4-430e-8f17-bd040051f77d\") " pod="openshift-marketplace/certified-operators-6v5np" Feb 16 17:01:39 crc kubenswrapper[4794]: E0216 17:01:39.069382 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:39.569366906 +0000 UTC m=+125.517461543 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.099145 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7cctn" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.112560 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.168724 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gxf57"] Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.170129 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gxf57" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.171253 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa756591-c2f4-430e-8f17-bd040051f77d-utilities\") pod \"certified-operators-6v5np\" (UID: \"aa756591-c2f4-430e-8f17-bd040051f77d\") " pod="openshift-marketplace/certified-operators-6v5np" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.171282 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.171333 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa756591-c2f4-430e-8f17-bd040051f77d-catalog-content\") pod \"certified-operators-6v5np\" (UID: \"aa756591-c2f4-430e-8f17-bd040051f77d\") " pod="openshift-marketplace/certified-operators-6v5np" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.171365 4794 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7n8rd\" (UniqueName: \"kubernetes.io/projected/aa756591-c2f4-430e-8f17-bd040051f77d-kube-api-access-7n8rd\") pod \"certified-operators-6v5np\" (UID: \"aa756591-c2f4-430e-8f17-bd040051f77d\") " pod="openshift-marketplace/certified-operators-6v5np" Feb 16 17:01:39 crc kubenswrapper[4794]: E0216 17:01:39.171681 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:39.671666015 +0000 UTC m=+125.619760662 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.172055 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa756591-c2f4-430e-8f17-bd040051f77d-catalog-content\") pod \"certified-operators-6v5np\" (UID: \"aa756591-c2f4-430e-8f17-bd040051f77d\") " pod="openshift-marketplace/certified-operators-6v5np" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.172398 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa756591-c2f4-430e-8f17-bd040051f77d-utilities\") pod \"certified-operators-6v5np\" (UID: \"aa756591-c2f4-430e-8f17-bd040051f77d\") " pod="openshift-marketplace/certified-operators-6v5np" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.192656 4794 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/community-operators-gxf57"] Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.214615 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n8rd\" (UniqueName: \"kubernetes.io/projected/aa756591-c2f4-430e-8f17-bd040051f77d-kube-api-access-7n8rd\") pod \"certified-operators-6v5np\" (UID: \"aa756591-c2f4-430e-8f17-bd040051f77d\") " pod="openshift-marketplace/certified-operators-6v5np" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.272596 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:39 crc kubenswrapper[4794]: E0216 17:01:39.272775 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:39.772749661 +0000 UTC m=+125.720844308 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.273150 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2a611b-e699-45f3-a8ca-a687be266a1f-catalog-content\") pod \"community-operators-gxf57\" (UID: \"0c2a611b-e699-45f3-a8ca-a687be266a1f\") " pod="openshift-marketplace/community-operators-gxf57" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.273243 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.273318 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2a611b-e699-45f3-a8ca-a687be266a1f-utilities\") pod \"community-operators-gxf57\" (UID: \"0c2a611b-e699-45f3-a8ca-a687be266a1f\") " pod="openshift-marketplace/community-operators-gxf57" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.273387 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td8nh\" (UniqueName: 
\"kubernetes.io/projected/0c2a611b-e699-45f3-a8ca-a687be266a1f-kube-api-access-td8nh\") pod \"community-operators-gxf57\" (UID: \"0c2a611b-e699-45f3-a8ca-a687be266a1f\") " pod="openshift-marketplace/community-operators-gxf57" Feb 16 17:01:39 crc kubenswrapper[4794]: E0216 17:01:39.273513 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:39.773505902 +0000 UTC m=+125.721600539 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.313589 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6v5np" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.313750 4794 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.323888 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.350876 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gj5mf"] Feb 16 17:01:39 crc kubenswrapper[4794]: E0216 17:01:39.351112 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a6aade7-1b78-4753-a22d-7251a1b27c9e" containerName="collect-profiles" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.351126 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a6aade7-1b78-4753-a22d-7251a1b27c9e" containerName="collect-profiles" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.351228 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a6aade7-1b78-4753-a22d-7251a1b27c9e" containerName="collect-profiles" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.352526 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gj5mf" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.373931 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a6aade7-1b78-4753-a22d-7251a1b27c9e-config-volume\") pod \"1a6aade7-1b78-4753-a22d-7251a1b27c9e\" (UID: \"1a6aade7-1b78-4753-a22d-7251a1b27c9e\") " Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.374210 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a6aade7-1b78-4753-a22d-7251a1b27c9e-secret-volume\") pod \"1a6aade7-1b78-4753-a22d-7251a1b27c9e\" (UID: \"1a6aade7-1b78-4753-a22d-7251a1b27c9e\") " Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.374383 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.374431 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cctv\" (UniqueName: \"kubernetes.io/projected/1a6aade7-1b78-4753-a22d-7251a1b27c9e-kube-api-access-5cctv\") pod \"1a6aade7-1b78-4753-a22d-7251a1b27c9e\" (UID: \"1a6aade7-1b78-4753-a22d-7251a1b27c9e\") " Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.374646 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2a611b-e699-45f3-a8ca-a687be266a1f-catalog-content\") pod \"community-operators-gxf57\" (UID: \"0c2a611b-e699-45f3-a8ca-a687be266a1f\") " pod="openshift-marketplace/community-operators-gxf57" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.374717 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2a611b-e699-45f3-a8ca-a687be266a1f-utilities\") pod \"community-operators-gxf57\" (UID: \"0c2a611b-e699-45f3-a8ca-a687be266a1f\") " pod="openshift-marketplace/community-operators-gxf57" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.374765 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td8nh\" (UniqueName: \"kubernetes.io/projected/0c2a611b-e699-45f3-a8ca-a687be266a1f-kube-api-access-td8nh\") pod \"community-operators-gxf57\" (UID: \"0c2a611b-e699-45f3-a8ca-a687be266a1f\") " pod="openshift-marketplace/community-operators-gxf57" Feb 16 17:01:39 crc kubenswrapper[4794]: E0216 17:01:39.375385 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:39.875366919 +0000 UTC m=+125.823461566 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.375646 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6aade7-1b78-4753-a22d-7251a1b27c9e-config-volume" (OuterVolumeSpecName: "config-volume") pod "1a6aade7-1b78-4753-a22d-7251a1b27c9e" (UID: "1a6aade7-1b78-4753-a22d-7251a1b27c9e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.375866 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2a611b-e699-45f3-a8ca-a687be266a1f-catalog-content\") pod \"community-operators-gxf57\" (UID: \"0c2a611b-e699-45f3-a8ca-a687be266a1f\") " pod="openshift-marketplace/community-operators-gxf57" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.376035 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2a611b-e699-45f3-a8ca-a687be266a1f-utilities\") pod \"community-operators-gxf57\" (UID: \"0c2a611b-e699-45f3-a8ca-a687be266a1f\") " pod="openshift-marketplace/community-operators-gxf57" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.385487 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gj5mf"] Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.385822 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a6aade7-1b78-4753-a22d-7251a1b27c9e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1a6aade7-1b78-4753-a22d-7251a1b27c9e" (UID: "1a6aade7-1b78-4753-a22d-7251a1b27c9e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.391647 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a6aade7-1b78-4753-a22d-7251a1b27c9e-kube-api-access-5cctv" (OuterVolumeSpecName: "kube-api-access-5cctv") pod "1a6aade7-1b78-4753-a22d-7251a1b27c9e" (UID: "1a6aade7-1b78-4753-a22d-7251a1b27c9e"). InnerVolumeSpecName "kube-api-access-5cctv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.400854 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td8nh\" (UniqueName: \"kubernetes.io/projected/0c2a611b-e699-45f3-a8ca-a687be266a1f-kube-api-access-td8nh\") pod \"community-operators-gxf57\" (UID: \"0c2a611b-e699-45f3-a8ca-a687be266a1f\") " pod="openshift-marketplace/community-operators-gxf57" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.475860 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.475910 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbtrp\" (UniqueName: \"kubernetes.io/projected/68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4-kube-api-access-xbtrp\") pod \"certified-operators-gj5mf\" (UID: \"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4\") " pod="openshift-marketplace/certified-operators-gj5mf" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.476012 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4-utilities\") pod \"certified-operators-gj5mf\" (UID: \"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4\") " pod="openshift-marketplace/certified-operators-gj5mf" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.476055 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4-catalog-content\") pod \"certified-operators-gj5mf\" (UID: \"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4\") " pod="openshift-marketplace/certified-operators-gj5mf" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.476141 4794 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a6aade7-1b78-4753-a22d-7251a1b27c9e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.476162 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cctv\" (UniqueName: \"kubernetes.io/projected/1a6aade7-1b78-4753-a22d-7251a1b27c9e-kube-api-access-5cctv\") on node \"crc\" DevicePath \"\"" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.476187 4794 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a6aade7-1b78-4753-a22d-7251a1b27c9e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 17:01:39 crc kubenswrapper[4794]: E0216 17:01:39.476249 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:39.97623046 +0000 UTC m=+125.924325187 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.517932 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7cctn"] Feb 16 17:01:39 crc kubenswrapper[4794]: W0216 17:01:39.526219 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f9ab6e7_980e_4a61_9072_cd2baa7c51ab.slice/crio-2d17d3d6a236065cd7e811f43742b7a5f2dae8d121ac92522410e06373e9ed16 WatchSource:0}: Error finding container 2d17d3d6a236065cd7e811f43742b7a5f2dae8d121ac92522410e06373e9ed16: Status 404 returned error can't find the container with id 2d17d3d6a236065cd7e811f43742b7a5f2dae8d121ac92522410e06373e9ed16 Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.536260 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gxf57" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.577216 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:39 crc kubenswrapper[4794]: E0216 17:01:39.577518 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:40.077499211 +0000 UTC m=+126.025593858 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.577603 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.577638 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbtrp\" (UniqueName: 
\"kubernetes.io/projected/68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4-kube-api-access-xbtrp\") pod \"certified-operators-gj5mf\" (UID: \"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4\") " pod="openshift-marketplace/certified-operators-gj5mf" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.577714 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4-utilities\") pod \"certified-operators-gj5mf\" (UID: \"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4\") " pod="openshift-marketplace/certified-operators-gj5mf" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.577735 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4-catalog-content\") pod \"certified-operators-gj5mf\" (UID: \"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4\") " pod="openshift-marketplace/certified-operators-gj5mf" Feb 16 17:01:39 crc kubenswrapper[4794]: E0216 17:01:39.577923 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:40.077913612 +0000 UTC m=+126.026008259 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.578195 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4-catalog-content\") pod \"certified-operators-gj5mf\" (UID: \"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4\") " pod="openshift-marketplace/certified-operators-gj5mf" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.578499 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4-utilities\") pod \"certified-operators-gj5mf\" (UID: \"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4\") " pod="openshift-marketplace/certified-operators-gj5mf" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.604602 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbtrp\" (UniqueName: \"kubernetes.io/projected/68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4-kube-api-access-xbtrp\") pod \"certified-operators-gj5mf\" (UID: \"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4\") " pod="openshift-marketplace/certified-operators-gj5mf" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.678388 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:39 crc kubenswrapper[4794]: E0216 17:01:39.678902 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:40.178871296 +0000 UTC m=+126.126965953 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.686944 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6v5np"] Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.698616 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gj5mf" Feb 16 17:01:39 crc kubenswrapper[4794]: W0216 17:01:39.709271 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa756591_c2f4_430e_8f17_bd040051f77d.slice/crio-a7a07ef59f3883f8372eff4d9509e673c252c5232022c8b539a224431faa3010 WatchSource:0}: Error finding container a7a07ef59f3883f8372eff4d9509e673c252c5232022c8b539a224431faa3010: Status 404 returned error can't find the container with id a7a07ef59f3883f8372eff4d9509e673c252c5232022c8b539a224431faa3010 Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.779966 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:39 crc kubenswrapper[4794]: E0216 17:01:39.780362 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:40.280345843 +0000 UTC m=+126.228440490 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.781988 4794 generic.go:334] "Generic (PLEG): container finished" podID="0f9ab6e7-980e-4a61-9072-cd2baa7c51ab" containerID="71bdf376a8635ab531c452263b9c2823c51d4966b981aa47b197296044fdd364" exitCode=0 Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.782062 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cctn" event={"ID":"0f9ab6e7-980e-4a61-9072-cd2baa7c51ab","Type":"ContainerDied","Data":"71bdf376a8635ab531c452263b9c2823c51d4966b981aa47b197296044fdd364"} Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.782090 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cctn" event={"ID":"0f9ab6e7-980e-4a61-9072-cd2baa7c51ab","Type":"ContainerStarted","Data":"2d17d3d6a236065cd7e811f43742b7a5f2dae8d121ac92522410e06373e9ed16"} Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.787851 4794 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.793048 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-znzx2" event={"ID":"cb3afbce-0480-4641-95db-17a3c9c28d2d","Type":"ContainerStarted","Data":"e6b030dc1407b3d6363a6bbe6254eeab7026a3a8c71df1810f9f369c4bfd6d60"} Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.793086 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-znzx2" event={"ID":"cb3afbce-0480-4641-95db-17a3c9c28d2d","Type":"ContainerStarted","Data":"830fb63bcdd7c8cf6218c63dec6c0f90694c5e296d23fcaac1e0b2858fe3e27b"} Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.794915 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ae8b5f6d-af1b-4be6-a043-4e829a09c334","Type":"ContainerStarted","Data":"5fe7d591404662e616d2d45d02537248710db623229cf5e5ccba3c098ee270df"} Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.794954 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ae8b5f6d-af1b-4be6-a043-4e829a09c334","Type":"ContainerStarted","Data":"6ee15f942a115fcfd992078df82c6d112c5486bf3d7c797ccfd8b22887d1e3a8"} Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.795560 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6v5np" event={"ID":"aa756591-c2f4-430e-8f17-bd040051f77d","Type":"ContainerStarted","Data":"a7a07ef59f3883f8372eff4d9509e673c252c5232022c8b539a224431faa3010"} Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.798583 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.799212 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql" event={"ID":"1a6aade7-1b78-4753-a22d-7251a1b27c9e","Type":"ContainerDied","Data":"660ca2126ccd857b7dbceb03e3fd87a4dd1d93029487e6c8512be6fcf1962de8"} Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.799264 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="660ca2126ccd857b7dbceb03e3fd87a4dd1d93029487e6c8512be6fcf1962de8" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.819863 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gxf57"] Feb 16 17:01:39 crc kubenswrapper[4794]: W0216 17:01:39.820590 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c2a611b_e699_45f3_a8ca_a687be266a1f.slice/crio-66b4e9d64f78da64a1b2d7d51d30d62ffff6140465fe7c2997814c53c2ca3a58 WatchSource:0}: Error finding container 66b4e9d64f78da64a1b2d7d51d30d62ffff6140465fe7c2997814c53c2ca3a58: Status 404 returned error can't find the container with id 66b4e9d64f78da64a1b2d7d51d30d62ffff6140465fe7c2997814c53c2ca3a58 Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.828120 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.828098502 podStartE2EDuration="2.828098502s" podCreationTimestamp="2026-02-16 17:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:39.824925508 +0000 UTC m=+125.773020145" watchObservedRunningTime="2026-02-16 17:01:39.828098502 +0000 UTC m=+125.776193149" Feb 16 17:01:39 crc kubenswrapper[4794]: 
I0216 17:01:39.843044 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-znzx2" podStartSLOduration=10.843027249 podStartE2EDuration="10.843027249s" podCreationTimestamp="2026-02-16 17:01:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:39.839180796 +0000 UTC m=+125.787275453" watchObservedRunningTime="2026-02-16 17:01:39.843027249 +0000 UTC m=+125.791121896" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.882804 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:39 crc kubenswrapper[4794]: E0216 17:01:39.882973 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:40.38294754 +0000 UTC m=+126.331042187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.884595 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:01:39 crc kubenswrapper[4794]: E0216 17:01:39.892404 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:01:40.39238375 +0000 UTC m=+126.340478397 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h6xgf" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.897149 4794 patch_prober.go:28] interesting pod/router-default-5444994796-xtklb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:01:39 crc kubenswrapper[4794]: [-]has-synced failed: reason withheld Feb 16 17:01:39 crc kubenswrapper[4794]: [+]process-running ok Feb 16 17:01:39 crc kubenswrapper[4794]: healthz check failed Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.897204 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xtklb" podUID="33ee8fad-d568-45d8-b55f-3302e5f3c9c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.914084 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gj5mf"] Feb 16 17:01:39 crc kubenswrapper[4794]: W0216 17:01:39.955286 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68b9b9c6_df6b_4125_bd0c_6352e6f4f2d4.slice/crio-dcaada487ce6dc13cfa8ffb912817d09ba242370b8fee3c36e9b1aa6aa10768d WatchSource:0}: Error finding container dcaada487ce6dc13cfa8ffb912817d09ba242370b8fee3c36e9b1aa6aa10768d: Status 404 returned error can't find the container with id dcaada487ce6dc13cfa8ffb912817d09ba242370b8fee3c36e9b1aa6aa10768d 
Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.962875 4794 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-16T17:01:39.313766792Z","Handler":null,"Name":""}
Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.965452 4794 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.965493 4794 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 16 17:01:39 crc kubenswrapper[4794]: I0216 17:01:39.994709 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.008471 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.096941 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.105285 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.105364 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.174853 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h6xgf\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.194852 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.204398 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.527134 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.528026 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.529915 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.530090 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.536094 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.596546 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-h6xgf"]
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.603120 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/abe25642-f7aa-45c2-8d68-82e02528e51d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"abe25642-f7aa-45c2-8d68-82e02528e51d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.603233 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abe25642-f7aa-45c2-8d68-82e02528e51d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"abe25642-f7aa-45c2-8d68-82e02528e51d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.705617 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abe25642-f7aa-45c2-8d68-82e02528e51d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"abe25642-f7aa-45c2-8d68-82e02528e51d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.705719 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/abe25642-f7aa-45c2-8d68-82e02528e51d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"abe25642-f7aa-45c2-8d68-82e02528e51d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.705837 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/abe25642-f7aa-45c2-8d68-82e02528e51d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"abe25642-f7aa-45c2-8d68-82e02528e51d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.725422 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abe25642-f7aa-45c2-8d68-82e02528e51d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"abe25642-f7aa-45c2-8d68-82e02528e51d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.745572 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5sk9z"]
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.746595 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5sk9z"
Feb 16 17:01:40 crc kubenswrapper[4794]: W0216 17:01:40.748058 4794 reflector.go:561] object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb": failed to list *v1.Secret: secrets "redhat-marketplace-dockercfg-x2ctb" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object
Feb 16 17:01:40 crc kubenswrapper[4794]: E0216 17:01:40.748102 4794 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-x2ctb\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"redhat-marketplace-dockercfg-x2ctb\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.755058 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5sk9z"]
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.797245 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.802984 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" event={"ID":"789593ed-6d75-46b7-9c80-641a7b76a749","Type":"ContainerStarted","Data":"91c2ff5c3f37f22bdd653102c9b771591f66a337535d5dc54f1705a24aab60c2"}
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.803062 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" event={"ID":"789593ed-6d75-46b7-9c80-641a7b76a749","Type":"ContainerStarted","Data":"6cc72aa74e08be7b7f90c37d77fee398e40cd0ec7d74a50ff72a7b6ca094498d"}
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.803101 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.804412 4794 generic.go:334] "Generic (PLEG): container finished" podID="aa756591-c2f4-430e-8f17-bd040051f77d" containerID="8be47568071a475c4b7ba4c8c9f0978791a2a0e64a8f50e98b9aeb572de37aa6" exitCode=0
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.804470 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6v5np" event={"ID":"aa756591-c2f4-430e-8f17-bd040051f77d","Type":"ContainerDied","Data":"8be47568071a475c4b7ba4c8c9f0978791a2a0e64a8f50e98b9aeb572de37aa6"}
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.806420 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9a576c-db95-4e07-9d36-c93e7adfbc46-utilities\") pod \"redhat-marketplace-5sk9z\" (UID: \"3d9a576c-db95-4e07-9d36-c93e7adfbc46\") " pod="openshift-marketplace/redhat-marketplace-5sk9z"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.806449 4794 generic.go:334] "Generic (PLEG): container finished" podID="68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4" containerID="4343b2a6b0391ee86617460676f89afd50fb3a665a1e28a2601eaa2c6f530a4d" exitCode=0
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.806467 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9a576c-db95-4e07-9d36-c93e7adfbc46-catalog-content\") pod \"redhat-marketplace-5sk9z\" (UID: \"3d9a576c-db95-4e07-9d36-c93e7adfbc46\") " pod="openshift-marketplace/redhat-marketplace-5sk9z"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.806510 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gj5mf" event={"ID":"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4","Type":"ContainerDied","Data":"4343b2a6b0391ee86617460676f89afd50fb3a665a1e28a2601eaa2c6f530a4d"}
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.806530 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gj5mf" event={"ID":"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4","Type":"ContainerStarted","Data":"dcaada487ce6dc13cfa8ffb912817d09ba242370b8fee3c36e9b1aa6aa10768d"}
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.806539 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9mgj\" (UniqueName: \"kubernetes.io/projected/3d9a576c-db95-4e07-9d36-c93e7adfbc46-kube-api-access-x9mgj\") pod \"redhat-marketplace-5sk9z\" (UID: \"3d9a576c-db95-4e07-9d36-c93e7adfbc46\") " pod="openshift-marketplace/redhat-marketplace-5sk9z"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.809924 4794 generic.go:334] "Generic (PLEG): container finished" podID="0c2a611b-e699-45f3-a8ca-a687be266a1f" containerID="d654ef9a7b68f323e80fcbb4e0dfe5554d402a5b222e4ad4a3c28618ff8d01b3" exitCode=0
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.809986 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gxf57" event={"ID":"0c2a611b-e699-45f3-a8ca-a687be266a1f","Type":"ContainerDied","Data":"d654ef9a7b68f323e80fcbb4e0dfe5554d402a5b222e4ad4a3c28618ff8d01b3"}
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.810012 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gxf57" event={"ID":"0c2a611b-e699-45f3-a8ca-a687be266a1f","Type":"ContainerStarted","Data":"66b4e9d64f78da64a1b2d7d51d30d62ffff6140465fe7c2997814c53c2ca3a58"}
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.812168 4794 generic.go:334] "Generic (PLEG): container finished" podID="ae8b5f6d-af1b-4be6-a043-4e829a09c334" containerID="5fe7d591404662e616d2d45d02537248710db623229cf5e5ccba3c098ee270df" exitCode=0
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.812233 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ae8b5f6d-af1b-4be6-a043-4e829a09c334","Type":"ContainerDied","Data":"5fe7d591404662e616d2d45d02537248710db623229cf5e5ccba3c098ee270df"}
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.826477 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" podStartSLOduration=105.826457117 podStartE2EDuration="1m45.826457117s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:40.823865188 +0000 UTC m=+126.771959855" watchObservedRunningTime="2026-02-16 17:01:40.826457117 +0000 UTC m=+126.774551764"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.845887 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.896174 4794 patch_prober.go:28] interesting pod/router-default-5444994796-xtklb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:01:40 crc kubenswrapper[4794]: [-]has-synced failed: reason withheld
Feb 16 17:01:40 crc kubenswrapper[4794]: [+]process-running ok
Feb 16 17:01:40 crc kubenswrapper[4794]: healthz check failed
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.896229 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xtklb" podUID="33ee8fad-d568-45d8-b55f-3302e5f3c9c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.910670 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9a576c-db95-4e07-9d36-c93e7adfbc46-catalog-content\") pod \"redhat-marketplace-5sk9z\" (UID: \"3d9a576c-db95-4e07-9d36-c93e7adfbc46\") " pod="openshift-marketplace/redhat-marketplace-5sk9z"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.910734 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9mgj\" (UniqueName: \"kubernetes.io/projected/3d9a576c-db95-4e07-9d36-c93e7adfbc46-kube-api-access-x9mgj\") pod \"redhat-marketplace-5sk9z\" (UID: \"3d9a576c-db95-4e07-9d36-c93e7adfbc46\") " pod="openshift-marketplace/redhat-marketplace-5sk9z"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.910874 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9a576c-db95-4e07-9d36-c93e7adfbc46-utilities\") pod \"redhat-marketplace-5sk9z\" (UID: \"3d9a576c-db95-4e07-9d36-c93e7adfbc46\") " pod="openshift-marketplace/redhat-marketplace-5sk9z"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.914131 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9a576c-db95-4e07-9d36-c93e7adfbc46-utilities\") pod \"redhat-marketplace-5sk9z\" (UID: \"3d9a576c-db95-4e07-9d36-c93e7adfbc46\") " pod="openshift-marketplace/redhat-marketplace-5sk9z"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.916565 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9a576c-db95-4e07-9d36-c93e7adfbc46-catalog-content\") pod \"redhat-marketplace-5sk9z\" (UID: \"3d9a576c-db95-4e07-9d36-c93e7adfbc46\") " pod="openshift-marketplace/redhat-marketplace-5sk9z"
Feb 16 17:01:40 crc kubenswrapper[4794]: I0216 17:01:40.960456 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9mgj\" (UniqueName: \"kubernetes.io/projected/3d9a576c-db95-4e07-9d36-c93e7adfbc46-kube-api-access-x9mgj\") pod \"redhat-marketplace-5sk9z\" (UID: \"3d9a576c-db95-4e07-9d36-c93e7adfbc46\") " pod="openshift-marketplace/redhat-marketplace-5sk9z"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.146678 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-69m8b"]
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.147704 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69m8b"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.168179 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-69m8b"]
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.214171 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8kfl\" (UniqueName: \"kubernetes.io/projected/7039bec8-af08-4439-be97-c6ee7d3a1c3b-kube-api-access-f8kfl\") pod \"redhat-marketplace-69m8b\" (UID: \"7039bec8-af08-4439-be97-c6ee7d3a1c3b\") " pod="openshift-marketplace/redhat-marketplace-69m8b"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.214253 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7039bec8-af08-4439-be97-c6ee7d3a1c3b-catalog-content\") pod \"redhat-marketplace-69m8b\" (UID: \"7039bec8-af08-4439-be97-c6ee7d3a1c3b\") " pod="openshift-marketplace/redhat-marketplace-69m8b"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.214379 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7039bec8-af08-4439-be97-c6ee7d3a1c3b-utilities\") pod \"redhat-marketplace-69m8b\" (UID: \"7039bec8-af08-4439-be97-c6ee7d3a1c3b\") " pod="openshift-marketplace/redhat-marketplace-69m8b"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.283596 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.315149 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8kfl\" (UniqueName: \"kubernetes.io/projected/7039bec8-af08-4439-be97-c6ee7d3a1c3b-kube-api-access-f8kfl\") pod \"redhat-marketplace-69m8b\" (UID: \"7039bec8-af08-4439-be97-c6ee7d3a1c3b\") " pod="openshift-marketplace/redhat-marketplace-69m8b"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.315221 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7039bec8-af08-4439-be97-c6ee7d3a1c3b-catalog-content\") pod \"redhat-marketplace-69m8b\" (UID: \"7039bec8-af08-4439-be97-c6ee7d3a1c3b\") " pod="openshift-marketplace/redhat-marketplace-69m8b"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.315264 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7039bec8-af08-4439-be97-c6ee7d3a1c3b-utilities\") pod \"redhat-marketplace-69m8b\" (UID: \"7039bec8-af08-4439-be97-c6ee7d3a1c3b\") " pod="openshift-marketplace/redhat-marketplace-69m8b"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.316245 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7039bec8-af08-4439-be97-c6ee7d3a1c3b-utilities\") pod \"redhat-marketplace-69m8b\" (UID: \"7039bec8-af08-4439-be97-c6ee7d3a1c3b\") " pod="openshift-marketplace/redhat-marketplace-69m8b"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.316387 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7039bec8-af08-4439-be97-c6ee7d3a1c3b-catalog-content\") pod \"redhat-marketplace-69m8b\" (UID: \"7039bec8-af08-4439-be97-c6ee7d3a1c3b\") " pod="openshift-marketplace/redhat-marketplace-69m8b"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.337914 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8kfl\" (UniqueName: \"kubernetes.io/projected/7039bec8-af08-4439-be97-c6ee7d3a1c3b-kube-api-access-f8kfl\") pod \"redhat-marketplace-69m8b\" (UID: \"7039bec8-af08-4439-be97-c6ee7d3a1c3b\") " pod="openshift-marketplace/redhat-marketplace-69m8b"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.687222 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.688027 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69m8b"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.692812 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5sk9z"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.769384 4794 patch_prober.go:28] interesting pod/downloads-7954f5f757-fg5gp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.769430 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fg5gp" podUID="5cff129b-bd54-4115-bc42-d5617d10eae0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.769545 4794 patch_prober.go:28] interesting pod/downloads-7954f5f757-fg5gp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.769655 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fg5gp" podUID="5cff129b-bd54-4115-bc42-d5617d10eae0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.830920 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"abe25642-f7aa-45c2-8d68-82e02528e51d","Type":"ContainerStarted","Data":"eaa031d6ed50123101b0836eb10553af10a8ef0f1978f5396763e267a2ab907f"}
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.831005 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"abe25642-f7aa-45c2-8d68-82e02528e51d","Type":"ContainerStarted","Data":"781e66b18bd5108c7ba418d87bb5b8ceabb7866840b4f24b9c7af6debf16994f"}
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.855883 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=1.855857496 podStartE2EDuration="1.855857496s" podCreationTimestamp="2026-02-16 17:01:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:41.849832316 +0000 UTC m=+127.797926963" watchObservedRunningTime="2026-02-16 17:01:41.855857496 +0000 UTC m=+127.803952153"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.893703 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xtklb"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.904073 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.904103 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.908063 4794 patch_prober.go:28] interesting pod/router-default-5444994796-xtklb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:01:41 crc kubenswrapper[4794]: [-]has-synced failed: reason withheld
Feb 16 17:01:41 crc kubenswrapper[4794]: [+]process-running ok
Feb 16 17:01:41 crc kubenswrapper[4794]: healthz check failed
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.908294 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xtklb" podUID="33ee8fad-d568-45d8-b55f-3302e5f3c9c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.927471 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-5fbkt"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.961506 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7nzlb"]
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.963254 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7nzlb"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.968401 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 16 17:01:41 crc kubenswrapper[4794]: I0216 17:01:41.974256 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7nzlb"]
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.034819 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bfe3d12-bcac-4380-b906-7abe78d56232-catalog-content\") pod \"redhat-operators-7nzlb\" (UID: \"1bfe3d12-bcac-4380-b906-7abe78d56232\") " pod="openshift-marketplace/redhat-operators-7nzlb"
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.034888 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bfe3d12-bcac-4380-b906-7abe78d56232-utilities\") pod \"redhat-operators-7nzlb\" (UID: \"1bfe3d12-bcac-4380-b906-7abe78d56232\") " pod="openshift-marketplace/redhat-operators-7nzlb"
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.035556 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8t2r\" (UniqueName: \"kubernetes.io/projected/1bfe3d12-bcac-4380-b906-7abe78d56232-kube-api-access-w8t2r\") pod \"redhat-operators-7nzlb\" (UID: \"1bfe3d12-bcac-4380-b906-7abe78d56232\") " pod="openshift-marketplace/redhat-operators-7nzlb"
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.047943 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-zwsbc"
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.047994 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-zwsbc"
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.054325 4794 patch_prober.go:28] interesting pod/console-f9d7485db-zwsbc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.054446 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zwsbc" podUID="f3fa8c07-9947-4f5c-8295-bdec401113b0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused"
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.137898 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bfe3d12-bcac-4380-b906-7abe78d56232-catalog-content\") pod \"redhat-operators-7nzlb\" (UID: \"1bfe3d12-bcac-4380-b906-7abe78d56232\") " pod="openshift-marketplace/redhat-operators-7nzlb"
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.137952 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bfe3d12-bcac-4380-b906-7abe78d56232-utilities\") pod \"redhat-operators-7nzlb\" (UID: \"1bfe3d12-bcac-4380-b906-7abe78d56232\") " pod="openshift-marketplace/redhat-operators-7nzlb"
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.138029 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8t2r\" (UniqueName: \"kubernetes.io/projected/1bfe3d12-bcac-4380-b906-7abe78d56232-kube-api-access-w8t2r\") pod \"redhat-operators-7nzlb\" (UID: \"1bfe3d12-bcac-4380-b906-7abe78d56232\") " pod="openshift-marketplace/redhat-operators-7nzlb"
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.140441 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bfe3d12-bcac-4380-b906-7abe78d56232-catalog-content\") pod \"redhat-operators-7nzlb\" (UID: \"1bfe3d12-bcac-4380-b906-7abe78d56232\") " pod="openshift-marketplace/redhat-operators-7nzlb"
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.140476 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bfe3d12-bcac-4380-b906-7abe78d56232-utilities\") pod \"redhat-operators-7nzlb\" (UID: \"1bfe3d12-bcac-4380-b906-7abe78d56232\") " pod="openshift-marketplace/redhat-operators-7nzlb"
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.183660 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8t2r\" (UniqueName: \"kubernetes.io/projected/1bfe3d12-bcac-4380-b906-7abe78d56232-kube-api-access-w8t2r\") pod \"redhat-operators-7nzlb\" (UID: \"1bfe3d12-bcac-4380-b906-7abe78d56232\") " pod="openshift-marketplace/redhat-operators-7nzlb"
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.235829 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.300037 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7nzlb"
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.341998 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae8b5f6d-af1b-4be6-a043-4e829a09c334-kube-api-access\") pod \"ae8b5f6d-af1b-4be6-a043-4e829a09c334\" (UID: \"ae8b5f6d-af1b-4be6-a043-4e829a09c334\") "
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.342131 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae8b5f6d-af1b-4be6-a043-4e829a09c334-kubelet-dir\") pod \"ae8b5f6d-af1b-4be6-a043-4e829a09c334\" (UID: \"ae8b5f6d-af1b-4be6-a043-4e829a09c334\") "
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.342363 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae8b5f6d-af1b-4be6-a043-4e829a09c334-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ae8b5f6d-af1b-4be6-a043-4e829a09c334" (UID: "ae8b5f6d-af1b-4be6-a043-4e829a09c334"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.342532 4794 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae8b5f6d-af1b-4be6-a043-4e829a09c334-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.346999 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j2khs"]
Feb 16 17:01:42 crc kubenswrapper[4794]: E0216 17:01:42.347614 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae8b5f6d-af1b-4be6-a043-4e829a09c334" containerName="pruner"
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.347635 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae8b5f6d-af1b-4be6-a043-4e829a09c334" containerName="pruner"
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.347939 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae8b5f6d-af1b-4be6-a043-4e829a09c334" containerName="pruner"
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.349598 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2khs"
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.356559 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae8b5f6d-af1b-4be6-a043-4e829a09c334-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ae8b5f6d-af1b-4be6-a043-4e829a09c334" (UID: "ae8b5f6d-af1b-4be6-a043-4e829a09c334"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.360740 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2khs"]
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.412814 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-69m8b"]
Feb 16 17:01:42 crc kubenswrapper[4794]: W0216 17:01:42.424503 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7039bec8_af08_4439_be97_c6ee7d3a1c3b.slice/crio-31ffc3c4144597eed2bada54478d8f685ecd5904dfe32ad979d007d47ba95a7b WatchSource:0}: Error finding container 31ffc3c4144597eed2bada54478d8f685ecd5904dfe32ad979d007d47ba95a7b: Status 404 returned error can't find the container with id 31ffc3c4144597eed2bada54478d8f685ecd5904dfe32ad979d007d47ba95a7b
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.438779 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5sk9z"]
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.443669 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljpff\" (UniqueName: \"kubernetes.io/projected/fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4-kube-api-access-ljpff\") pod \"redhat-operators-j2khs\" (UID: \"fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4\") " pod="openshift-marketplace/redhat-operators-j2khs"
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.443740 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4-utilities\") pod \"redhat-operators-j2khs\" (UID: \"fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4\") " pod="openshift-marketplace/redhat-operators-j2khs"
Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.443977 4794 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4-catalog-content\") pod \"redhat-operators-j2khs\" (UID: \"fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4\") " pod="openshift-marketplace/redhat-operators-j2khs" Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.444205 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae8b5f6d-af1b-4be6-a043-4e829a09c334-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 17:01:42 crc kubenswrapper[4794]: W0216 17:01:42.497085 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d9a576c_db95_4e07_9d36_c93e7adfbc46.slice/crio-9940f5d7c5b0dda7893f45c0ed536276c8eeabc4543591fa673ebe10815dd2a9 WatchSource:0}: Error finding container 9940f5d7c5b0dda7893f45c0ed536276c8eeabc4543591fa673ebe10815dd2a9: Status 404 returned error can't find the container with id 9940f5d7c5b0dda7893f45c0ed536276c8eeabc4543591fa673ebe10815dd2a9 Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.545241 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljpff\" (UniqueName: \"kubernetes.io/projected/fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4-kube-api-access-ljpff\") pod \"redhat-operators-j2khs\" (UID: \"fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4\") " pod="openshift-marketplace/redhat-operators-j2khs" Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.545330 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4-utilities\") pod \"redhat-operators-j2khs\" (UID: \"fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4\") " pod="openshift-marketplace/redhat-operators-j2khs" Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.545388 
4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4-catalog-content\") pod \"redhat-operators-j2khs\" (UID: \"fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4\") " pod="openshift-marketplace/redhat-operators-j2khs" Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.545936 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4-catalog-content\") pod \"redhat-operators-j2khs\" (UID: \"fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4\") " pod="openshift-marketplace/redhat-operators-j2khs" Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.546047 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4-utilities\") pod \"redhat-operators-j2khs\" (UID: \"fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4\") " pod="openshift-marketplace/redhat-operators-j2khs" Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.589187 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljpff\" (UniqueName: \"kubernetes.io/projected/fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4-kube-api-access-ljpff\") pod \"redhat-operators-j2khs\" (UID: \"fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4\") " pod="openshift-marketplace/redhat-operators-j2khs" Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.654927 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7nzlb"] Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.662722 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-85b84" Feb 16 17:01:42 crc kubenswrapper[4794]: W0216 17:01:42.674346 4794 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bfe3d12_bcac_4380_b906_7abe78d56232.slice/crio-6efa4dc49181c7cf0212d254d20676f72be8f2186405d309e0572457abafc23c WatchSource:0}: Error finding container 6efa4dc49181c7cf0212d254d20676f72be8f2186405d309e0572457abafc23c: Status 404 returned error can't find the container with id 6efa4dc49181c7cf0212d254d20676f72be8f2186405d309e0572457abafc23c Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.682398 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2khs" Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.845968 4794 generic.go:334] "Generic (PLEG): container finished" podID="3d9a576c-db95-4e07-9d36-c93e7adfbc46" containerID="2d75b48ef75f4e53d1c41c694ceb6430dc93619c33bb03bc02136286684c8a61" exitCode=0 Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.846035 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5sk9z" event={"ID":"3d9a576c-db95-4e07-9d36-c93e7adfbc46","Type":"ContainerDied","Data":"2d75b48ef75f4e53d1c41c694ceb6430dc93619c33bb03bc02136286684c8a61"} Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.846066 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5sk9z" event={"ID":"3d9a576c-db95-4e07-9d36-c93e7adfbc46","Type":"ContainerStarted","Data":"9940f5d7c5b0dda7893f45c0ed536276c8eeabc4543591fa673ebe10815dd2a9"} Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.854121 4794 generic.go:334] "Generic (PLEG): container finished" podID="7039bec8-af08-4439-be97-c6ee7d3a1c3b" containerID="5d4245a21571624bce4be15cf67d676b937f28baa4a1c196cfd8ae9ea44134d2" exitCode=0 Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.854187 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69m8b" 
event={"ID":"7039bec8-af08-4439-be97-c6ee7d3a1c3b","Type":"ContainerDied","Data":"5d4245a21571624bce4be15cf67d676b937f28baa4a1c196cfd8ae9ea44134d2"} Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.854217 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69m8b" event={"ID":"7039bec8-af08-4439-be97-c6ee7d3a1c3b","Type":"ContainerStarted","Data":"31ffc3c4144597eed2bada54478d8f685ecd5904dfe32ad979d007d47ba95a7b"} Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.867181 4794 generic.go:334] "Generic (PLEG): container finished" podID="abe25642-f7aa-45c2-8d68-82e02528e51d" containerID="eaa031d6ed50123101b0836eb10553af10a8ef0f1978f5396763e267a2ab907f" exitCode=0 Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.867467 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"abe25642-f7aa-45c2-8d68-82e02528e51d","Type":"ContainerDied","Data":"eaa031d6ed50123101b0836eb10553af10a8ef0f1978f5396763e267a2ab907f"} Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.875824 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.876647 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ae8b5f6d-af1b-4be6-a043-4e829a09c334","Type":"ContainerDied","Data":"6ee15f942a115fcfd992078df82c6d112c5486bf3d7c797ccfd8b22887d1e3a8"} Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.876678 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ee15f942a115fcfd992078df82c6d112c5486bf3d7c797ccfd8b22887d1e3a8" Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.879858 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nzlb" event={"ID":"1bfe3d12-bcac-4380-b906-7abe78d56232","Type":"ContainerStarted","Data":"6efa4dc49181c7cf0212d254d20676f72be8f2186405d309e0572457abafc23c"} Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.888376 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-5fbkt" Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.896653 4794 patch_prober.go:28] interesting pod/router-default-5444994796-xtklb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:01:42 crc kubenswrapper[4794]: [-]has-synced failed: reason withheld Feb 16 17:01:42 crc kubenswrapper[4794]: [+]process-running ok Feb 16 17:01:42 crc kubenswrapper[4794]: healthz check failed Feb 16 17:01:42 crc kubenswrapper[4794]: I0216 17:01:42.896708 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xtklb" podUID="33ee8fad-d568-45d8-b55f-3302e5f3c9c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:01:43 crc 
kubenswrapper[4794]: I0216 17:01:43.051895 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2khs"] Feb 16 17:01:43 crc kubenswrapper[4794]: I0216 17:01:43.888655 4794 generic.go:334] "Generic (PLEG): container finished" podID="fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4" containerID="a3f26ac3c6a59682308df3e4040334be9220b1204d01b2c57cc524b70f8deefb" exitCode=0 Feb 16 17:01:43 crc kubenswrapper[4794]: I0216 17:01:43.888754 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2khs" event={"ID":"fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4","Type":"ContainerDied","Data":"a3f26ac3c6a59682308df3e4040334be9220b1204d01b2c57cc524b70f8deefb"} Feb 16 17:01:43 crc kubenswrapper[4794]: I0216 17:01:43.888981 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2khs" event={"ID":"fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4","Type":"ContainerStarted","Data":"41acacd8ecfbc255c93b4c0770a793f7e164a6ea471ec9d3390936a9caf52573"} Feb 16 17:01:43 crc kubenswrapper[4794]: I0216 17:01:43.891956 4794 generic.go:334] "Generic (PLEG): container finished" podID="1bfe3d12-bcac-4380-b906-7abe78d56232" containerID="385cc116e4325ac949f928abfe7837ffb98d24c0e02c0cda253ac2e2c30ff8bc" exitCode=0 Feb 16 17:01:43 crc kubenswrapper[4794]: I0216 17:01:43.892049 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nzlb" event={"ID":"1bfe3d12-bcac-4380-b906-7abe78d56232","Type":"ContainerDied","Data":"385cc116e4325ac949f928abfe7837ffb98d24c0e02c0cda253ac2e2c30ff8bc"} Feb 16 17:01:43 crc kubenswrapper[4794]: I0216 17:01:43.895611 4794 patch_prober.go:28] interesting pod/router-default-5444994796-xtklb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:01:43 crc kubenswrapper[4794]: [-]has-synced failed: 
reason withheld Feb 16 17:01:43 crc kubenswrapper[4794]: [+]process-running ok Feb 16 17:01:43 crc kubenswrapper[4794]: healthz check failed Feb 16 17:01:43 crc kubenswrapper[4794]: I0216 17:01:43.895652 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xtklb" podUID="33ee8fad-d568-45d8-b55f-3302e5f3c9c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:01:44 crc kubenswrapper[4794]: I0216 17:01:44.203503 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 17:01:44 crc kubenswrapper[4794]: I0216 17:01:44.298946 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abe25642-f7aa-45c2-8d68-82e02528e51d-kube-api-access\") pod \"abe25642-f7aa-45c2-8d68-82e02528e51d\" (UID: \"abe25642-f7aa-45c2-8d68-82e02528e51d\") " Feb 16 17:01:44 crc kubenswrapper[4794]: I0216 17:01:44.299074 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/abe25642-f7aa-45c2-8d68-82e02528e51d-kubelet-dir\") pod \"abe25642-f7aa-45c2-8d68-82e02528e51d\" (UID: \"abe25642-f7aa-45c2-8d68-82e02528e51d\") " Feb 16 17:01:44 crc kubenswrapper[4794]: I0216 17:01:44.299416 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abe25642-f7aa-45c2-8d68-82e02528e51d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "abe25642-f7aa-45c2-8d68-82e02528e51d" (UID: "abe25642-f7aa-45c2-8d68-82e02528e51d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:01:44 crc kubenswrapper[4794]: I0216 17:01:44.308608 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe25642-f7aa-45c2-8d68-82e02528e51d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "abe25642-f7aa-45c2-8d68-82e02528e51d" (UID: "abe25642-f7aa-45c2-8d68-82e02528e51d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:01:44 crc kubenswrapper[4794]: I0216 17:01:44.400552 4794 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/abe25642-f7aa-45c2-8d68-82e02528e51d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 17:01:44 crc kubenswrapper[4794]: I0216 17:01:44.400582 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abe25642-f7aa-45c2-8d68-82e02528e51d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 17:01:44 crc kubenswrapper[4794]: I0216 17:01:44.895560 4794 patch_prober.go:28] interesting pod/router-default-5444994796-xtklb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:01:44 crc kubenswrapper[4794]: [-]has-synced failed: reason withheld Feb 16 17:01:44 crc kubenswrapper[4794]: [+]process-running ok Feb 16 17:01:44 crc kubenswrapper[4794]: healthz check failed Feb 16 17:01:44 crc kubenswrapper[4794]: I0216 17:01:44.895891 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xtklb" podUID="33ee8fad-d568-45d8-b55f-3302e5f3c9c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:01:44 crc kubenswrapper[4794]: I0216 17:01:44.900170 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"abe25642-f7aa-45c2-8d68-82e02528e51d","Type":"ContainerDied","Data":"781e66b18bd5108c7ba418d87bb5b8ceabb7866840b4f24b9c7af6debf16994f"} Feb 16 17:01:44 crc kubenswrapper[4794]: I0216 17:01:44.900208 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="781e66b18bd5108c7ba418d87bb5b8ceabb7866840b4f24b9c7af6debf16994f" Feb 16 17:01:44 crc kubenswrapper[4794]: I0216 17:01:44.900212 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 17:01:45 crc kubenswrapper[4794]: I0216 17:01:45.895568 4794 patch_prober.go:28] interesting pod/router-default-5444994796-xtklb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:01:45 crc kubenswrapper[4794]: [-]has-synced failed: reason withheld Feb 16 17:01:45 crc kubenswrapper[4794]: [+]process-running ok Feb 16 17:01:45 crc kubenswrapper[4794]: healthz check failed Feb 16 17:01:45 crc kubenswrapper[4794]: I0216 17:01:45.895639 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xtklb" podUID="33ee8fad-d568-45d8-b55f-3302e5f3c9c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:01:46 crc kubenswrapper[4794]: I0216 17:01:46.895576 4794 patch_prober.go:28] interesting pod/router-default-5444994796-xtklb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:01:46 crc kubenswrapper[4794]: [-]has-synced failed: reason withheld Feb 16 17:01:46 crc kubenswrapper[4794]: [+]process-running ok Feb 16 17:01:46 crc kubenswrapper[4794]: healthz check failed Feb 16 17:01:46 crc 
kubenswrapper[4794]: I0216 17:01:46.895868 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xtklb" podUID="33ee8fad-d568-45d8-b55f-3302e5f3c9c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:01:47 crc kubenswrapper[4794]: I0216 17:01:47.708748 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-9btqh" Feb 16 17:01:47 crc kubenswrapper[4794]: I0216 17:01:47.907482 4794 patch_prober.go:28] interesting pod/router-default-5444994796-xtklb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:01:47 crc kubenswrapper[4794]: [-]has-synced failed: reason withheld Feb 16 17:01:47 crc kubenswrapper[4794]: [+]process-running ok Feb 16 17:01:47 crc kubenswrapper[4794]: healthz check failed Feb 16 17:01:47 crc kubenswrapper[4794]: I0216 17:01:47.907546 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xtklb" podUID="33ee8fad-d568-45d8-b55f-3302e5f3c9c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:01:48 crc kubenswrapper[4794]: I0216 17:01:48.894955 4794 patch_prober.go:28] interesting pod/router-default-5444994796-xtklb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:01:48 crc kubenswrapper[4794]: [-]has-synced failed: reason withheld Feb 16 17:01:48 crc kubenswrapper[4794]: [+]process-running ok Feb 16 17:01:48 crc kubenswrapper[4794]: healthz check failed Feb 16 17:01:48 crc kubenswrapper[4794]: I0216 17:01:48.895009 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xtklb" 
podUID="33ee8fad-d568-45d8-b55f-3302e5f3c9c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:01:48 crc kubenswrapper[4794]: I0216 17:01:48.936054 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 17:01:49 crc kubenswrapper[4794]: I0216 17:01:49.895379 4794 patch_prober.go:28] interesting pod/router-default-5444994796-xtklb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:01:49 crc kubenswrapper[4794]: [-]has-synced failed: reason withheld Feb 16 17:01:49 crc kubenswrapper[4794]: [+]process-running ok Feb 16 17:01:49 crc kubenswrapper[4794]: healthz check failed Feb 16 17:01:49 crc kubenswrapper[4794]: I0216 17:01:49.895529 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xtklb" podUID="33ee8fad-d568-45d8-b55f-3302e5f3c9c0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:01:50 crc kubenswrapper[4794]: I0216 17:01:50.898407 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xtklb" Feb 16 17:01:50 crc kubenswrapper[4794]: I0216 17:01:50.903368 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xtklb" Feb 16 17:01:51 crc kubenswrapper[4794]: I0216 17:01:51.776285 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-fg5gp" Feb 16 17:01:52 crc kubenswrapper[4794]: I0216 17:01:52.048776 4794 patch_prober.go:28] interesting pod/console-f9d7485db-zwsbc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 
10.217.0.8:8443: connect: connection refused" start-of-body= Feb 16 17:01:52 crc kubenswrapper[4794]: I0216 17:01:52.048828 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zwsbc" podUID="f3fa8c07-9947-4f5c-8295-bdec401113b0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 16 17:01:54 crc kubenswrapper[4794]: I0216 17:01:54.837731 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vtrkl"] Feb 16 17:01:54 crc kubenswrapper[4794]: I0216 17:01:54.838138 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl" podUID="077b99e5-e95c-4afd-9008-d1f18a6b2f70" containerName="controller-manager" containerID="cri-o://add7e3e5400922633073ebdd7a32eccd94de433e0c188355401f3ae32fe751dc" gracePeriod=30 Feb 16 17:01:54 crc kubenswrapper[4794]: I0216 17:01:54.856717 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2"] Feb 16 17:01:54 crc kubenswrapper[4794]: I0216 17:01:54.856912 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2" podUID="4943f829-0922-4e87-a750-1cfc2f2f1b72" containerName="route-controller-manager" containerID="cri-o://9abd71c82915eb373f82de656aee2691b053ba7809d24c2611c0a857d4f7f0e6" gracePeriod=30 Feb 16 17:01:55 crc kubenswrapper[4794]: I0216 17:01:55.990476 4794 generic.go:334] "Generic (PLEG): container finished" podID="077b99e5-e95c-4afd-9008-d1f18a6b2f70" containerID="add7e3e5400922633073ebdd7a32eccd94de433e0c188355401f3ae32fe751dc" exitCode=0 Feb 16 17:01:55 crc kubenswrapper[4794]: I0216 17:01:55.990566 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl" event={"ID":"077b99e5-e95c-4afd-9008-d1f18a6b2f70","Type":"ContainerDied","Data":"add7e3e5400922633073ebdd7a32eccd94de433e0c188355401f3ae32fe751dc"} Feb 16 17:01:55 crc kubenswrapper[4794]: I0216 17:01:55.992000 4794 generic.go:334] "Generic (PLEG): container finished" podID="4943f829-0922-4e87-a750-1cfc2f2f1b72" containerID="9abd71c82915eb373f82de656aee2691b053ba7809d24c2611c0a857d4f7f0e6" exitCode=0 Feb 16 17:01:55 crc kubenswrapper[4794]: I0216 17:01:55.992035 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2" event={"ID":"4943f829-0922-4e87-a750-1cfc2f2f1b72","Type":"ContainerDied","Data":"9abd71c82915eb373f82de656aee2691b053ba7809d24c2611c0a857d4f7f0e6"} Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.531145 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2" Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.581485 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4"] Feb 16 17:01:59 crc kubenswrapper[4794]: E0216 17:01:59.581840 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe25642-f7aa-45c2-8d68-82e02528e51d" containerName="pruner" Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.581860 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe25642-f7aa-45c2-8d68-82e02528e51d" containerName="pruner" Feb 16 17:01:59 crc kubenswrapper[4794]: E0216 17:01:59.581877 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4943f829-0922-4e87-a750-1cfc2f2f1b72" containerName="route-controller-manager" Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.581885 4794 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4943f829-0922-4e87-a750-1cfc2f2f1b72" containerName="route-controller-manager"
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.582064 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="4943f829-0922-4e87-a750-1cfc2f2f1b72" containerName="route-controller-manager"
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.582083 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="abe25642-f7aa-45c2-8d68-82e02528e51d" containerName="pruner"
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.583096 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4"
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.593215 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4"]
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.642000 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4943f829-0922-4e87-a750-1cfc2f2f1b72-client-ca\") pod \"4943f829-0922-4e87-a750-1cfc2f2f1b72\" (UID: \"4943f829-0922-4e87-a750-1cfc2f2f1b72\") "
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.642095 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4943f829-0922-4e87-a750-1cfc2f2f1b72-serving-cert\") pod \"4943f829-0922-4e87-a750-1cfc2f2f1b72\" (UID: \"4943f829-0922-4e87-a750-1cfc2f2f1b72\") "
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.642126 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4943f829-0922-4e87-a750-1cfc2f2f1b72-config\") pod \"4943f829-0922-4e87-a750-1cfc2f2f1b72\" (UID: \"4943f829-0922-4e87-a750-1cfc2f2f1b72\") "
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.642162 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv4m5\" (UniqueName: \"kubernetes.io/projected/4943f829-0922-4e87-a750-1cfc2f2f1b72-kube-api-access-tv4m5\") pod \"4943f829-0922-4e87-a750-1cfc2f2f1b72\" (UID: \"4943f829-0922-4e87-a750-1cfc2f2f1b72\") "
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.642364 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a32e5495-8e9f-495d-8bf0-e0e986d23411-client-ca\") pod \"route-controller-manager-7c588587d7-p4rr4\" (UID: \"a32e5495-8e9f-495d-8bf0-e0e986d23411\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4"
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.642453 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a32e5495-8e9f-495d-8bf0-e0e986d23411-config\") pod \"route-controller-manager-7c588587d7-p4rr4\" (UID: \"a32e5495-8e9f-495d-8bf0-e0e986d23411\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4"
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.642484 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a32e5495-8e9f-495d-8bf0-e0e986d23411-serving-cert\") pod \"route-controller-manager-7c588587d7-p4rr4\" (UID: \"a32e5495-8e9f-495d-8bf0-e0e986d23411\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4"
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.642646 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z4k6\" (UniqueName: \"kubernetes.io/projected/a32e5495-8e9f-495d-8bf0-e0e986d23411-kube-api-access-6z4k6\") pod \"route-controller-manager-7c588587d7-p4rr4\" (UID: \"a32e5495-8e9f-495d-8bf0-e0e986d23411\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4"
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.643571 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4943f829-0922-4e87-a750-1cfc2f2f1b72-config" (OuterVolumeSpecName: "config") pod "4943f829-0922-4e87-a750-1cfc2f2f1b72" (UID: "4943f829-0922-4e87-a750-1cfc2f2f1b72"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.644130 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4943f829-0922-4e87-a750-1cfc2f2f1b72-client-ca" (OuterVolumeSpecName: "client-ca") pod "4943f829-0922-4e87-a750-1cfc2f2f1b72" (UID: "4943f829-0922-4e87-a750-1cfc2f2f1b72"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.656051 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4943f829-0922-4e87-a750-1cfc2f2f1b72-kube-api-access-tv4m5" (OuterVolumeSpecName: "kube-api-access-tv4m5") pod "4943f829-0922-4e87-a750-1cfc2f2f1b72" (UID: "4943f829-0922-4e87-a750-1cfc2f2f1b72"). InnerVolumeSpecName "kube-api-access-tv4m5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.656635 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4943f829-0922-4e87-a750-1cfc2f2f1b72-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4943f829-0922-4e87-a750-1cfc2f2f1b72" (UID: "4943f829-0922-4e87-a750-1cfc2f2f1b72"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.744599 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z4k6\" (UniqueName: \"kubernetes.io/projected/a32e5495-8e9f-495d-8bf0-e0e986d23411-kube-api-access-6z4k6\") pod \"route-controller-manager-7c588587d7-p4rr4\" (UID: \"a32e5495-8e9f-495d-8bf0-e0e986d23411\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4"
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.744697 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a32e5495-8e9f-495d-8bf0-e0e986d23411-client-ca\") pod \"route-controller-manager-7c588587d7-p4rr4\" (UID: \"a32e5495-8e9f-495d-8bf0-e0e986d23411\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4"
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.744772 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a32e5495-8e9f-495d-8bf0-e0e986d23411-config\") pod \"route-controller-manager-7c588587d7-p4rr4\" (UID: \"a32e5495-8e9f-495d-8bf0-e0e986d23411\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4"
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.744795 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a32e5495-8e9f-495d-8bf0-e0e986d23411-serving-cert\") pod \"route-controller-manager-7c588587d7-p4rr4\" (UID: \"a32e5495-8e9f-495d-8bf0-e0e986d23411\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4"
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.744828 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv4m5\" (UniqueName: \"kubernetes.io/projected/4943f829-0922-4e87-a750-1cfc2f2f1b72-kube-api-access-tv4m5\") on node \"crc\" DevicePath \"\""
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.744839 4794 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4943f829-0922-4e87-a750-1cfc2f2f1b72-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.744848 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4943f829-0922-4e87-a750-1cfc2f2f1b72-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.744856 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4943f829-0922-4e87-a750-1cfc2f2f1b72-config\") on node \"crc\" DevicePath \"\""
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.745804 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a32e5495-8e9f-495d-8bf0-e0e986d23411-client-ca\") pod \"route-controller-manager-7c588587d7-p4rr4\" (UID: \"a32e5495-8e9f-495d-8bf0-e0e986d23411\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4"
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.746352 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a32e5495-8e9f-495d-8bf0-e0e986d23411-config\") pod \"route-controller-manager-7c588587d7-p4rr4\" (UID: \"a32e5495-8e9f-495d-8bf0-e0e986d23411\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4"
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.750600 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a32e5495-8e9f-495d-8bf0-e0e986d23411-serving-cert\") pod \"route-controller-manager-7c588587d7-p4rr4\" (UID: \"a32e5495-8e9f-495d-8bf0-e0e986d23411\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4"
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.771265 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z4k6\" (UniqueName: \"kubernetes.io/projected/a32e5495-8e9f-495d-8bf0-e0e986d23411-kube-api-access-6z4k6\") pod \"route-controller-manager-7c588587d7-p4rr4\" (UID: \"a32e5495-8e9f-495d-8bf0-e0e986d23411\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4"
Feb 16 17:01:59 crc kubenswrapper[4794]: I0216 17:01:59.904519 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4"
Feb 16 17:02:00 crc kubenswrapper[4794]: I0216 17:02:00.020874 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2" event={"ID":"4943f829-0922-4e87-a750-1cfc2f2f1b72","Type":"ContainerDied","Data":"a9f7c12a30cf59fc961479a168a2afccfa40ba600e849f884889fcf850d6de01"}
Feb 16 17:02:00 crc kubenswrapper[4794]: I0216 17:02:00.021059 4794 scope.go:117] "RemoveContainer" containerID="9abd71c82915eb373f82de656aee2691b053ba7809d24c2611c0a857d4f7f0e6"
Feb 16 17:02:00 crc kubenswrapper[4794]: I0216 17:02:00.021134 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2"
Feb 16 17:02:00 crc kubenswrapper[4794]: I0216 17:02:00.062322 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2"]
Feb 16 17:02:00 crc kubenswrapper[4794]: I0216 17:02:00.064808 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-42sb2"]
Feb 16 17:02:00 crc kubenswrapper[4794]: I0216 17:02:00.231573 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf"
Feb 16 17:02:00 crc kubenswrapper[4794]: I0216 17:02:00.798254 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4943f829-0922-4e87-a750-1cfc2f2f1b72" path="/var/lib/kubelet/pods/4943f829-0922-4e87-a750-1cfc2f2f1b72/volumes"
Feb 16 17:02:01 crc kubenswrapper[4794]: I0216 17:02:01.931271 4794 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vtrkl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Feb 16 17:02:01 crc kubenswrapper[4794]: I0216 17:02:01.931717 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl" podUID="077b99e5-e95c-4afd-9008-d1f18a6b2f70" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Feb 16 17:02:02 crc kubenswrapper[4794]: I0216 17:02:02.060557 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-zwsbc"
Feb 16 17:02:02 crc kubenswrapper[4794]: I0216 17:02:02.067717 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-zwsbc"
Feb 16 17:02:03 crc kubenswrapper[4794]: I0216 17:02:03.705937 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 17:02:03 crc kubenswrapper[4794]: I0216 17:02:03.705997 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 17:02:03 crc kubenswrapper[4794]: I0216 17:02:03.706108 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 17:02:03 crc kubenswrapper[4794]: I0216 17:02:03.708162 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 16 17:02:03 crc kubenswrapper[4794]: I0216 17:02:03.708352 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 16 17:02:03 crc kubenswrapper[4794]: I0216 17:02:03.708852 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 16 17:02:03 crc kubenswrapper[4794]: I0216 17:02:03.718914 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 16 17:02:03 crc kubenswrapper[4794]: I0216 17:02:03.724751 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 17:02:03 crc kubenswrapper[4794]: I0216 17:02:03.732190 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 17:02:03 crc kubenswrapper[4794]: I0216 17:02:03.808066 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 17:02:03 crc kubenswrapper[4794]: I0216 17:02:03.812436 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 17:02:03 crc kubenswrapper[4794]: I0216 17:02:03.861788 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 17:02:03 crc kubenswrapper[4794]: I0216 17:02:03.917665 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 17:02:03 crc kubenswrapper[4794]: I0216 17:02:03.926056 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 17:02:03 crc kubenswrapper[4794]: I0216 17:02:03.932181 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 17:02:04 crc kubenswrapper[4794]: E0216 17:02:04.583573 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Feb 16 17:02:04 crc kubenswrapper[4794]: E0216 17:02:04.583815 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xbtrp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-gj5mf_openshift-marketplace(68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 16 17:02:04 crc kubenswrapper[4794]: E0216 17:02:04.585092 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gj5mf" podUID="68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4"
Feb 16 17:02:07 crc kubenswrapper[4794]: E0216 17:02:07.409085 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gj5mf" podUID="68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4"
Feb 16 17:02:07 crc kubenswrapper[4794]: E0216 17:02:07.463662 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 16 17:02:07 crc kubenswrapper[4794]: E0216 17:02:07.464412 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lq6f6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-7cctn_openshift-marketplace(0f9ab6e7-980e-4a61-9072-cd2baa7c51ab): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 16 17:02:07 crc kubenswrapper[4794]: E0216 17:02:07.465648 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-7cctn" podUID="0f9ab6e7-980e-4a61-9072-cd2baa7c51ab"
Feb 16 17:02:07 crc kubenswrapper[4794]: E0216 17:02:07.506549 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 16 17:02:07 crc kubenswrapper[4794]: E0216 17:02:07.506699 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-td8nh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gxf57_openshift-marketplace(0c2a611b-e699-45f3-a8ca-a687be266a1f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 16 17:02:07 crc kubenswrapper[4794]: E0216 17:02:07.507851 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gxf57" podUID="0c2a611b-e699-45f3-a8ca-a687be266a1f"
Feb 16 17:02:08 crc kubenswrapper[4794]: E0216 17:02:08.690938 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gxf57" podUID="0c2a611b-e699-45f3-a8ca-a687be266a1f"
Feb 16 17:02:08 crc kubenswrapper[4794]: E0216 17:02:08.691002 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-7cctn" podUID="0f9ab6e7-980e-4a61-9072-cd2baa7c51ab"
Feb 16 17:02:08 crc kubenswrapper[4794]: E0216 17:02:08.704124 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 16 17:02:08 crc kubenswrapper[4794]: E0216 17:02:08.704322 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8kfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-69m8b_openshift-marketplace(7039bec8-af08-4439-be97-c6ee7d3a1c3b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 16 17:02:08 crc kubenswrapper[4794]: E0216 17:02:08.705476 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-69m8b" podUID="7039bec8-af08-4439-be97-c6ee7d3a1c3b"
Feb 16 17:02:11 crc kubenswrapper[4794]: E0216 17:02:11.630733 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-69m8b" podUID="7039bec8-af08-4439-be97-c6ee7d3a1c3b"
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.680738 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl"
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.716736 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7469dbc894-7zpxc"]
Feb 16 17:02:11 crc kubenswrapper[4794]: E0216 17:02:11.716946 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077b99e5-e95c-4afd-9008-d1f18a6b2f70" containerName="controller-manager"
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.716957 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="077b99e5-e95c-4afd-9008-d1f18a6b2f70" containerName="controller-manager"
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.717050 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="077b99e5-e95c-4afd-9008-d1f18a6b2f70" containerName="controller-manager"
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.717446 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc"
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.734347 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7469dbc894-7zpxc"]
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.835231 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/077b99e5-e95c-4afd-9008-d1f18a6b2f70-config\") pod \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\" (UID: \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\") "
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.835283 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/077b99e5-e95c-4afd-9008-d1f18a6b2f70-client-ca\") pod \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\" (UID: \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\") "
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.835322 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/077b99e5-e95c-4afd-9008-d1f18a6b2f70-proxy-ca-bundles\") pod \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\" (UID: \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\") "
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.835396 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/077b99e5-e95c-4afd-9008-d1f18a6b2f70-serving-cert\") pod \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\" (UID: \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\") "
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.835449 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5zdw\" (UniqueName: \"kubernetes.io/projected/077b99e5-e95c-4afd-9008-d1f18a6b2f70-kube-api-access-p5zdw\") pod \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\" (UID: \"077b99e5-e95c-4afd-9008-d1f18a6b2f70\") "
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.835651 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-proxy-ca-bundles\") pod \"controller-manager-7469dbc894-7zpxc\" (UID: \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\") " pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc"
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.835731 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-client-ca\") pod \"controller-manager-7469dbc894-7zpxc\" (UID: \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\") " pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc"
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.835757 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-serving-cert\") pod \"controller-manager-7469dbc894-7zpxc\" (UID: \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\") " pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc"
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.835782 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9slvq\" (UniqueName: \"kubernetes.io/projected/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-kube-api-access-9slvq\") pod \"controller-manager-7469dbc894-7zpxc\" (UID: \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\") " pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc"
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.835813 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-config\") pod \"controller-manager-7469dbc894-7zpxc\" (UID: \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\") " pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc"
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.836328 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/077b99e5-e95c-4afd-9008-d1f18a6b2f70-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "077b99e5-e95c-4afd-9008-d1f18a6b2f70" (UID: "077b99e5-e95c-4afd-9008-d1f18a6b2f70"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.836373 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/077b99e5-e95c-4afd-9008-d1f18a6b2f70-client-ca" (OuterVolumeSpecName: "client-ca") pod "077b99e5-e95c-4afd-9008-d1f18a6b2f70" (UID: "077b99e5-e95c-4afd-9008-d1f18a6b2f70"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.836391 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/077b99e5-e95c-4afd-9008-d1f18a6b2f70-config" (OuterVolumeSpecName: "config") pod "077b99e5-e95c-4afd-9008-d1f18a6b2f70" (UID: "077b99e5-e95c-4afd-9008-d1f18a6b2f70"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.844137 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/077b99e5-e95c-4afd-9008-d1f18a6b2f70-kube-api-access-p5zdw" (OuterVolumeSpecName: "kube-api-access-p5zdw") pod "077b99e5-e95c-4afd-9008-d1f18a6b2f70" (UID: "077b99e5-e95c-4afd-9008-d1f18a6b2f70"). InnerVolumeSpecName "kube-api-access-p5zdw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.844386 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/077b99e5-e95c-4afd-9008-d1f18a6b2f70-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "077b99e5-e95c-4afd-9008-d1f18a6b2f70" (UID: "077b99e5-e95c-4afd-9008-d1f18a6b2f70"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.937059 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-config\") pod \"controller-manager-7469dbc894-7zpxc\" (UID: \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\") " pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc"
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.937416 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-proxy-ca-bundles\") pod \"controller-manager-7469dbc894-7zpxc\" (UID: \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\") " pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc"
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.937465 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-client-ca\") pod \"controller-manager-7469dbc894-7zpxc\" (UID: \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\") " pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc"
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.937481 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-serving-cert\") pod \"controller-manager-7469dbc894-7zpxc\" (UID: \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\") " pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc"
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.937496 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9slvq\" (UniqueName: \"kubernetes.io/projected/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-kube-api-access-9slvq\") pod \"controller-manager-7469dbc894-7zpxc\" (UID: \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\") " pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc"
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.937534 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/077b99e5-e95c-4afd-9008-d1f18a6b2f70-config\") on node \"crc\" DevicePath \"\""
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.937544 4794 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/077b99e5-e95c-4afd-9008-d1f18a6b2f70-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.937554 4794 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/077b99e5-e95c-4afd-9008-d1f18a6b2f70-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.937564 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/077b99e5-e95c-4afd-9008-d1f18a6b2f70-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.937572 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5zdw\" (UniqueName: \"kubernetes.io/projected/077b99e5-e95c-4afd-9008-d1f18a6b2f70-kube-api-access-p5zdw\") on node \"crc\" DevicePath \"\""
Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.938900 4794
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-config\") pod \"controller-manager-7469dbc894-7zpxc\" (UID: \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\") " pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc" Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.940713 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-proxy-ca-bundles\") pod \"controller-manager-7469dbc894-7zpxc\" (UID: \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\") " pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc" Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.941105 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-client-ca\") pod \"controller-manager-7469dbc894-7zpxc\" (UID: \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\") " pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc" Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.956292 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-serving-cert\") pod \"controller-manager-7469dbc894-7zpxc\" (UID: \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\") " pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc" Feb 16 17:02:11 crc kubenswrapper[4794]: I0216 17:02:11.958349 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9slvq\" (UniqueName: \"kubernetes.io/projected/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-kube-api-access-9slvq\") pod \"controller-manager-7469dbc894-7zpxc\" (UID: \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\") " pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc" Feb 16 
17:02:12 crc kubenswrapper[4794]: I0216 17:02:12.048101 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc" Feb 16 17:02:12 crc kubenswrapper[4794]: I0216 17:02:12.094055 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl" Feb 16 17:02:12 crc kubenswrapper[4794]: I0216 17:02:12.095284 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vtrkl" event={"ID":"077b99e5-e95c-4afd-9008-d1f18a6b2f70","Type":"ContainerDied","Data":"c32d6153944692c4ac9d2529aae40bd2b1f6ea68674de196d99841de270f21ff"} Feb 16 17:02:12 crc kubenswrapper[4794]: I0216 17:02:12.095469 4794 scope.go:117] "RemoveContainer" containerID="add7e3e5400922633073ebdd7a32eccd94de433e0c188355401f3ae32fe751dc" Feb 16 17:02:12 crc kubenswrapper[4794]: I0216 17:02:12.166455 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vtrkl"] Feb 16 17:02:12 crc kubenswrapper[4794]: I0216 17:02:12.170357 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vtrkl"] Feb 16 17:02:12 crc kubenswrapper[4794]: I0216 17:02:12.280489 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7469dbc894-7zpxc"] Feb 16 17:02:12 crc kubenswrapper[4794]: W0216 17:02:12.341932 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce9cf273_395a_42b2_820b_cd4bf8aa21d6.slice/crio-0135945649f08c959f4b19d4fe4685d9795bcd0f2ed9c8412e59a8439c3f6fca WatchSource:0}: Error finding container 0135945649f08c959f4b19d4fe4685d9795bcd0f2ed9c8412e59a8439c3f6fca: Status 404 returned error can't find the container with id 
0135945649f08c959f4b19d4fe4685d9795bcd0f2ed9c8412e59a8439c3f6fca Feb 16 17:02:12 crc kubenswrapper[4794]: I0216 17:02:12.404914 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4"] Feb 16 17:02:12 crc kubenswrapper[4794]: W0216 17:02:12.446467 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda32e5495_8e9f_495d_8bf0_e0e986d23411.slice/crio-f769b1e21d9fa6bb69e690bcbfdb8e58b9e430b7501db465f0df0fd1e84ccbd8 WatchSource:0}: Error finding container f769b1e21d9fa6bb69e690bcbfdb8e58b9e430b7501db465f0df0fd1e84ccbd8: Status 404 returned error can't find the container with id f769b1e21d9fa6bb69e690bcbfdb8e58b9e430b7501db465f0df0fd1e84ccbd8 Feb 16 17:02:12 crc kubenswrapper[4794]: W0216 17:02:12.448428 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-4d567514ad520c20c42639940b993106207dcf492a4313946b7d7cf232513cdd WatchSource:0}: Error finding container 4d567514ad520c20c42639940b993106207dcf492a4313946b7d7cf232513cdd: Status 404 returned error can't find the container with id 4d567514ad520c20c42639940b993106207dcf492a4313946b7d7cf232513cdd Feb 16 17:02:12 crc kubenswrapper[4794]: I0216 17:02:12.616447 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jv2jb" Feb 16 17:02:12 crc kubenswrapper[4794]: I0216 17:02:12.800387 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="077b99e5-e95c-4afd-9008-d1f18a6b2f70" path="/var/lib/kubelet/pods/077b99e5-e95c-4afd-9008-d1f18a6b2f70/volumes" Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.104579 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc" event={"ID":"ce9cf273-395a-42b2-820b-cd4bf8aa21d6","Type":"ContainerStarted","Data":"62866bca697e873c7f000bbc6b07489588e167ab903f5399bf54bdb583ff4070"} Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.104633 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc" event={"ID":"ce9cf273-395a-42b2-820b-cd4bf8aa21d6","Type":"ContainerStarted","Data":"0135945649f08c959f4b19d4fe4685d9795bcd0f2ed9c8412e59a8439c3f6fca"} Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.106010 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc" Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.108020 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4" event={"ID":"a32e5495-8e9f-495d-8bf0-e0e986d23411","Type":"ContainerStarted","Data":"2c96eb756a3cb86fba338033da93c560cfc74e6c6ecc005e614de21b2f8fd4b5"} Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.108057 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4" event={"ID":"a32e5495-8e9f-495d-8bf0-e0e986d23411","Type":"ContainerStarted","Data":"f769b1e21d9fa6bb69e690bcbfdb8e58b9e430b7501db465f0df0fd1e84ccbd8"} Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.108748 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4" Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.110506 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc" Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.111381 4794 generic.go:334] "Generic (PLEG): 
container finished" podID="1bfe3d12-bcac-4380-b906-7abe78d56232" containerID="c584e24afd6b22886dc219def6085e7103673de36483b3ebd2d33856c94b59ae" exitCode=0 Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.111430 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nzlb" event={"ID":"1bfe3d12-bcac-4380-b906-7abe78d56232","Type":"ContainerDied","Data":"c584e24afd6b22886dc219def6085e7103673de36483b3ebd2d33856c94b59ae"} Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.114656 4794 generic.go:334] "Generic (PLEG): container finished" podID="3d9a576c-db95-4e07-9d36-c93e7adfbc46" containerID="c5b8ee0b6432c5bc56708a1b1812d6a38bbd76d7dff5c48d0b77a8f2c85fbb38" exitCode=0 Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.114700 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5sk9z" event={"ID":"3d9a576c-db95-4e07-9d36-c93e7adfbc46","Type":"ContainerDied","Data":"c5b8ee0b6432c5bc56708a1b1812d6a38bbd76d7dff5c48d0b77a8f2c85fbb38"} Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.118242 4794 generic.go:334] "Generic (PLEG): container finished" podID="fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4" containerID="86002c4fc1d3c2d06a269afcd1ebb62da1898b3a1f8dc562fead5a84b0cb3c6a" exitCode=0 Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.118318 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2khs" event={"ID":"fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4","Type":"ContainerDied","Data":"86002c4fc1d3c2d06a269afcd1ebb62da1898b3a1f8dc562fead5a84b0cb3c6a"} Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.121066 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bc458e09290434989a2b7194bb411c3d92ab18e8843ab54dab0abeefadd73219"} Feb 16 17:02:13 crc 
kubenswrapper[4794]: I0216 17:02:13.121094 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4d567514ad520c20c42639940b993106207dcf492a4313946b7d7cf232513cdd"} Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.127454 4794 generic.go:334] "Generic (PLEG): container finished" podID="aa756591-c2f4-430e-8f17-bd040051f77d" containerID="8bdb027dee1055b133f8785550e922a775ef974fd3cab4d1bb112e3a933160f7" exitCode=0 Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.127629 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6v5np" event={"ID":"aa756591-c2f4-430e-8f17-bd040051f77d","Type":"ContainerDied","Data":"8bdb027dee1055b133f8785550e922a775ef974fd3cab4d1bb112e3a933160f7"} Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.129761 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7526eb52574af61d7cea6700d22a59cb6e7c90ddec2f3605903f2dff5c106b3e"} Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.129788 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"67fd3511c5e947b5d20bf702fd9b7fb39e4f6dd0d9d46afdd511bf82865460f2"} Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.130328 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.133669 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4" Feb 16 17:02:13 
crc kubenswrapper[4794]: I0216 17:02:13.142524 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c34d45d940382ea51cfc9dc9d9028e2964cc87cf73f875c0c0918ee4a40d1bf0"} Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.142562 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"172562d03b82c2026e6d4bd5de0ca21f81b3d22decdf7a2e809a3ab02cbbabd9"} Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.191842 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc" podStartSLOduration=19.191819606 podStartE2EDuration="19.191819606s" podCreationTimestamp="2026-02-16 17:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:13.163209255 +0000 UTC m=+159.111303912" watchObservedRunningTime="2026-02-16 17:02:13.191819606 +0000 UTC m=+159.139914253" Feb 16 17:02:13 crc kubenswrapper[4794]: I0216 17:02:13.337329 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4" podStartSLOduration=19.337280262 podStartE2EDuration="19.337280262s" podCreationTimestamp="2026-02-16 17:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:13.334435416 +0000 UTC m=+159.282530053" watchObservedRunningTime="2026-02-16 17:02:13.337280262 +0000 UTC m=+159.285374919" Feb 16 17:02:14 crc kubenswrapper[4794]: I0216 17:02:14.163416 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-6v5np" event={"ID":"aa756591-c2f4-430e-8f17-bd040051f77d","Type":"ContainerStarted","Data":"872d1b9c96df1b502dd7971130ede6ef9e6714b71a7ffd21124860e6b42c7de5"} Feb 16 17:02:14 crc kubenswrapper[4794]: I0216 17:02:14.166952 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nzlb" event={"ID":"1bfe3d12-bcac-4380-b906-7abe78d56232","Type":"ContainerStarted","Data":"b01e36befcd84ac0ca5e00992989458aca376661573ac71d358aa9145e63c6a8"} Feb 16 17:02:14 crc kubenswrapper[4794]: I0216 17:02:14.178686 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5sk9z" event={"ID":"3d9a576c-db95-4e07-9d36-c93e7adfbc46","Type":"ContainerStarted","Data":"101e2ace45e8f91bb0ec9f38f8a90442863142c22f3975f77a23c79e98e18732"} Feb 16 17:02:14 crc kubenswrapper[4794]: I0216 17:02:14.181735 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6v5np" podStartSLOduration=3.334352576 podStartE2EDuration="36.181721685s" podCreationTimestamp="2026-02-16 17:01:38 +0000 UTC" firstStartedPulling="2026-02-16 17:01:40.805458339 +0000 UTC m=+126.753552986" lastFinishedPulling="2026-02-16 17:02:13.652827458 +0000 UTC m=+159.600922095" observedRunningTime="2026-02-16 17:02:14.18039406 +0000 UTC m=+160.128488707" watchObservedRunningTime="2026-02-16 17:02:14.181721685 +0000 UTC m=+160.129816332" Feb 16 17:02:14 crc kubenswrapper[4794]: I0216 17:02:14.183992 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2khs" event={"ID":"fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4","Type":"ContainerStarted","Data":"de60c993299e13ba2c0c694214d1ed39cad8da75f3fcfaaff735e348ac8cf73f"} Feb 16 17:02:14 crc kubenswrapper[4794]: I0216 17:02:14.211830 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5sk9z" 
podStartSLOduration=3.556346183 podStartE2EDuration="34.211814985s" podCreationTimestamp="2026-02-16 17:01:40 +0000 UTC" firstStartedPulling="2026-02-16 17:01:42.850919344 +0000 UTC m=+128.799013991" lastFinishedPulling="2026-02-16 17:02:13.506388146 +0000 UTC m=+159.454482793" observedRunningTime="2026-02-16 17:02:14.207260804 +0000 UTC m=+160.155355451" watchObservedRunningTime="2026-02-16 17:02:14.211814985 +0000 UTC m=+160.159909632" Feb 16 17:02:14 crc kubenswrapper[4794]: I0216 17:02:14.242537 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7nzlb" podStartSLOduration=3.639579963 podStartE2EDuration="33.242519651s" podCreationTimestamp="2026-02-16 17:01:41 +0000 UTC" firstStartedPulling="2026-02-16 17:01:43.893572676 +0000 UTC m=+129.841667323" lastFinishedPulling="2026-02-16 17:02:13.496512364 +0000 UTC m=+159.444607011" observedRunningTime="2026-02-16 17:02:14.228831767 +0000 UTC m=+160.176926414" watchObservedRunningTime="2026-02-16 17:02:14.242519651 +0000 UTC m=+160.190614298" Feb 16 17:02:14 crc kubenswrapper[4794]: I0216 17:02:14.244194 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j2khs" podStartSLOduration=2.370380059 podStartE2EDuration="32.244187035s" podCreationTimestamp="2026-02-16 17:01:42 +0000 UTC" firstStartedPulling="2026-02-16 17:01:43.890296439 +0000 UTC m=+129.838391086" lastFinishedPulling="2026-02-16 17:02:13.764103415 +0000 UTC m=+159.712198062" observedRunningTime="2026-02-16 17:02:14.243372423 +0000 UTC m=+160.191467070" watchObservedRunningTime="2026-02-16 17:02:14.244187035 +0000 UTC m=+160.192281672" Feb 16 17:02:14 crc kubenswrapper[4794]: I0216 17:02:14.838657 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7469dbc894-7zpxc"] Feb 16 17:02:14 crc kubenswrapper[4794]: I0216 17:02:14.951268 4794 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4"] Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.202432 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4" podUID="a32e5495-8e9f-495d-8bf0-e0e986d23411" containerName="route-controller-manager" containerID="cri-o://2c96eb756a3cb86fba338033da93c560cfc74e6c6ecc005e614de21b2f8fd4b5" gracePeriod=30 Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.202925 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc" podUID="ce9cf273-395a-42b2-820b-cd4bf8aa21d6" containerName="controller-manager" containerID="cri-o://62866bca697e873c7f000bbc6b07489588e167ab903f5399bf54bdb583ff4070" gracePeriod=30 Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.644415 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4" Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.670134 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96"] Feb 16 17:02:16 crc kubenswrapper[4794]: E0216 17:02:16.670366 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a32e5495-8e9f-495d-8bf0-e0e986d23411" containerName="route-controller-manager" Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.670381 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="a32e5495-8e9f-495d-8bf0-e0e986d23411" containerName="route-controller-manager" Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.670523 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="a32e5495-8e9f-495d-8bf0-e0e986d23411" containerName="route-controller-manager" Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.671021 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96" Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.687988 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96"] Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.730799 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc" Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.814859 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a32e5495-8e9f-495d-8bf0-e0e986d23411-serving-cert\") pod \"a32e5495-8e9f-495d-8bf0-e0e986d23411\" (UID: \"a32e5495-8e9f-495d-8bf0-e0e986d23411\") " Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.815293 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a32e5495-8e9f-495d-8bf0-e0e986d23411-client-ca\") pod \"a32e5495-8e9f-495d-8bf0-e0e986d23411\" (UID: \"a32e5495-8e9f-495d-8bf0-e0e986d23411\") " Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.815452 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a32e5495-8e9f-495d-8bf0-e0e986d23411-config\") pod \"a32e5495-8e9f-495d-8bf0-e0e986d23411\" (UID: \"a32e5495-8e9f-495d-8bf0-e0e986d23411\") " Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.815602 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z4k6\" (UniqueName: \"kubernetes.io/projected/a32e5495-8e9f-495d-8bf0-e0e986d23411-kube-api-access-6z4k6\") pod \"a32e5495-8e9f-495d-8bf0-e0e986d23411\" (UID: \"a32e5495-8e9f-495d-8bf0-e0e986d23411\") " Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.815849 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a32e5495-8e9f-495d-8bf0-e0e986d23411-client-ca" (OuterVolumeSpecName: "client-ca") pod "a32e5495-8e9f-495d-8bf0-e0e986d23411" (UID: "a32e5495-8e9f-495d-8bf0-e0e986d23411"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.815988 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a32e5495-8e9f-495d-8bf0-e0e986d23411-config" (OuterVolumeSpecName: "config") pod "a32e5495-8e9f-495d-8bf0-e0e986d23411" (UID: "a32e5495-8e9f-495d-8bf0-e0e986d23411"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.816098 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-config\") pod \"route-controller-manager-79cb89f5b4-xvt96\" (UID: \"9877cb59-9113-4bc5-b05f-3fa3a9c25d45\") " pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96" Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.816201 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86bh5\" (UniqueName: \"kubernetes.io/projected/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-kube-api-access-86bh5\") pod \"route-controller-manager-79cb89f5b4-xvt96\" (UID: \"9877cb59-9113-4bc5-b05f-3fa3a9c25d45\") " pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96" Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.816349 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-client-ca\") pod \"route-controller-manager-79cb89f5b4-xvt96\" (UID: \"9877cb59-9113-4bc5-b05f-3fa3a9c25d45\") " pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96" Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.816442 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-serving-cert\") pod \"route-controller-manager-79cb89f5b4-xvt96\" (UID: \"9877cb59-9113-4bc5-b05f-3fa3a9c25d45\") " pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96" Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.816535 4794 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a32e5495-8e9f-495d-8bf0-e0e986d23411-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.816596 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a32e5495-8e9f-495d-8bf0-e0e986d23411-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.820763 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a32e5495-8e9f-495d-8bf0-e0e986d23411-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a32e5495-8e9f-495d-8bf0-e0e986d23411" (UID: "a32e5495-8e9f-495d-8bf0-e0e986d23411"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.826925 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a32e5495-8e9f-495d-8bf0-e0e986d23411-kube-api-access-6z4k6" (OuterVolumeSpecName: "kube-api-access-6z4k6") pod "a32e5495-8e9f-495d-8bf0-e0e986d23411" (UID: "a32e5495-8e9f-495d-8bf0-e0e986d23411"). InnerVolumeSpecName "kube-api-access-6z4k6". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.917979 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-serving-cert\") pod \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\" (UID: \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\") "
Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.918038 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-proxy-ca-bundles\") pod \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\" (UID: \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\") "
Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.918081 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-config\") pod \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\" (UID: \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\") "
Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.918103 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9slvq\" (UniqueName: \"kubernetes.io/projected/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-kube-api-access-9slvq\") pod \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\" (UID: \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\") "
Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.918135 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-client-ca\") pod \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\" (UID: \"ce9cf273-395a-42b2-820b-cd4bf8aa21d6\") "
Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.918364 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86bh5\" (UniqueName: \"kubernetes.io/projected/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-kube-api-access-86bh5\") pod \"route-controller-manager-79cb89f5b4-xvt96\" (UID: \"9877cb59-9113-4bc5-b05f-3fa3a9c25d45\") " pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96"
Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.918453 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-client-ca\") pod \"route-controller-manager-79cb89f5b4-xvt96\" (UID: \"9877cb59-9113-4bc5-b05f-3fa3a9c25d45\") " pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96"
Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.918504 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-serving-cert\") pod \"route-controller-manager-79cb89f5b4-xvt96\" (UID: \"9877cb59-9113-4bc5-b05f-3fa3a9c25d45\") " pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96"
Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.918547 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-config\") pod \"route-controller-manager-79cb89f5b4-xvt96\" (UID: \"9877cb59-9113-4bc5-b05f-3fa3a9c25d45\") " pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96"
Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.918598 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a32e5495-8e9f-495d-8bf0-e0e986d23411-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.918616 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z4k6\" (UniqueName: \"kubernetes.io/projected/a32e5495-8e9f-495d-8bf0-e0e986d23411-kube-api-access-6z4k6\") on node \"crc\" DevicePath \"\""
Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.919342 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ce9cf273-395a-42b2-820b-cd4bf8aa21d6" (UID: "ce9cf273-395a-42b2-820b-cd4bf8aa21d6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.919487 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-config" (OuterVolumeSpecName: "config") pod "ce9cf273-395a-42b2-820b-cd4bf8aa21d6" (UID: "ce9cf273-395a-42b2-820b-cd4bf8aa21d6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.919898 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-config\") pod \"route-controller-manager-79cb89f5b4-xvt96\" (UID: \"9877cb59-9113-4bc5-b05f-3fa3a9c25d45\") " pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96"
Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.920136 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-client-ca\") pod \"route-controller-manager-79cb89f5b4-xvt96\" (UID: \"9877cb59-9113-4bc5-b05f-3fa3a9c25d45\") " pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96"
Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.920181 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-client-ca" (OuterVolumeSpecName: "client-ca") pod "ce9cf273-395a-42b2-820b-cd4bf8aa21d6" (UID: "ce9cf273-395a-42b2-820b-cd4bf8aa21d6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.921336 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ce9cf273-395a-42b2-820b-cd4bf8aa21d6" (UID: "ce9cf273-395a-42b2-820b-cd4bf8aa21d6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.922631 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-kube-api-access-9slvq" (OuterVolumeSpecName: "kube-api-access-9slvq") pod "ce9cf273-395a-42b2-820b-cd4bf8aa21d6" (UID: "ce9cf273-395a-42b2-820b-cd4bf8aa21d6"). InnerVolumeSpecName "kube-api-access-9slvq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.924562 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-serving-cert\") pod \"route-controller-manager-79cb89f5b4-xvt96\" (UID: \"9877cb59-9113-4bc5-b05f-3fa3a9c25d45\") " pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96"
Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.940292 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86bh5\" (UniqueName: \"kubernetes.io/projected/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-kube-api-access-86bh5\") pod \"route-controller-manager-79cb89f5b4-xvt96\" (UID: \"9877cb59-9113-4bc5-b05f-3fa3a9c25d45\") " pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96"
Feb 16 17:02:16 crc kubenswrapper[4794]: I0216 17:02:16.987900 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96"
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.019541 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.019574 4794 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.019584 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9slvq\" (UniqueName: \"kubernetes.io/projected/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-kube-api-access-9slvq\") on node \"crc\" DevicePath \"\""
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.019592 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-config\") on node \"crc\" DevicePath \"\""
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.019605 4794 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce9cf273-395a-42b2-820b-cd4bf8aa21d6-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.210048 4794 generic.go:334] "Generic (PLEG): container finished" podID="a32e5495-8e9f-495d-8bf0-e0e986d23411" containerID="2c96eb756a3cb86fba338033da93c560cfc74e6c6ecc005e614de21b2f8fd4b5" exitCode=0
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.210400 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4"
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.212236 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4" event={"ID":"a32e5495-8e9f-495d-8bf0-e0e986d23411","Type":"ContainerDied","Data":"2c96eb756a3cb86fba338033da93c560cfc74e6c6ecc005e614de21b2f8fd4b5"}
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.212288 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4" event={"ID":"a32e5495-8e9f-495d-8bf0-e0e986d23411","Type":"ContainerDied","Data":"f769b1e21d9fa6bb69e690bcbfdb8e58b9e430b7501db465f0df0fd1e84ccbd8"}
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.212321 4794 scope.go:117] "RemoveContainer" containerID="2c96eb756a3cb86fba338033da93c560cfc74e6c6ecc005e614de21b2f8fd4b5"
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.213904 4794 generic.go:334] "Generic (PLEG): container finished" podID="ce9cf273-395a-42b2-820b-cd4bf8aa21d6" containerID="62866bca697e873c7f000bbc6b07489588e167ab903f5399bf54bdb583ff4070" exitCode=0
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.213925 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc" event={"ID":"ce9cf273-395a-42b2-820b-cd4bf8aa21d6","Type":"ContainerDied","Data":"62866bca697e873c7f000bbc6b07489588e167ab903f5399bf54bdb583ff4070"}
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.213938 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc" event={"ID":"ce9cf273-395a-42b2-820b-cd4bf8aa21d6","Type":"ContainerDied","Data":"0135945649f08c959f4b19d4fe4685d9795bcd0f2ed9c8412e59a8439c3f6fca"}
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.213976 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7469dbc894-7zpxc"
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.228896 4794 scope.go:117] "RemoveContainer" containerID="2c96eb756a3cb86fba338033da93c560cfc74e6c6ecc005e614de21b2f8fd4b5"
Feb 16 17:02:17 crc kubenswrapper[4794]: E0216 17:02:17.229581 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c96eb756a3cb86fba338033da93c560cfc74e6c6ecc005e614de21b2f8fd4b5\": container with ID starting with 2c96eb756a3cb86fba338033da93c560cfc74e6c6ecc005e614de21b2f8fd4b5 not found: ID does not exist" containerID="2c96eb756a3cb86fba338033da93c560cfc74e6c6ecc005e614de21b2f8fd4b5"
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.229620 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c96eb756a3cb86fba338033da93c560cfc74e6c6ecc005e614de21b2f8fd4b5"} err="failed to get container status \"2c96eb756a3cb86fba338033da93c560cfc74e6c6ecc005e614de21b2f8fd4b5\": rpc error: code = NotFound desc = could not find container \"2c96eb756a3cb86fba338033da93c560cfc74e6c6ecc005e614de21b2f8fd4b5\": container with ID starting with 2c96eb756a3cb86fba338033da93c560cfc74e6c6ecc005e614de21b2f8fd4b5 not found: ID does not exist"
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.229669 4794 scope.go:117] "RemoveContainer" containerID="62866bca697e873c7f000bbc6b07489588e167ab903f5399bf54bdb583ff4070"
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.249173 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7469dbc894-7zpxc"]
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.253383 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7469dbc894-7zpxc"]
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.260694 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4"]
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.262432 4794 scope.go:117] "RemoveContainer" containerID="62866bca697e873c7f000bbc6b07489588e167ab903f5399bf54bdb583ff4070"
Feb 16 17:02:17 crc kubenswrapper[4794]: E0216 17:02:17.262822 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62866bca697e873c7f000bbc6b07489588e167ab903f5399bf54bdb583ff4070\": container with ID starting with 62866bca697e873c7f000bbc6b07489588e167ab903f5399bf54bdb583ff4070 not found: ID does not exist" containerID="62866bca697e873c7f000bbc6b07489588e167ab903f5399bf54bdb583ff4070"
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.262854 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62866bca697e873c7f000bbc6b07489588e167ab903f5399bf54bdb583ff4070"} err="failed to get container status \"62866bca697e873c7f000bbc6b07489588e167ab903f5399bf54bdb583ff4070\": rpc error: code = NotFound desc = could not find container \"62866bca697e873c7f000bbc6b07489588e167ab903f5399bf54bdb583ff4070\": container with ID starting with 62866bca697e873c7f000bbc6b07489588e167ab903f5399bf54bdb583ff4070 not found: ID does not exist"
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.266162 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c588587d7-p4rr4"]
Feb 16 17:02:17 crc kubenswrapper[4794]: I0216 17:02:17.385465 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96"]
Feb 16 17:02:18 crc kubenswrapper[4794]: I0216 17:02:18.220113 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96" event={"ID":"9877cb59-9113-4bc5-b05f-3fa3a9c25d45","Type":"ContainerStarted","Data":"72c8f9d7ee2d63dddc01c6520dd864efec6d6dbb8a3a5334fecaf21faeebf98a"}
Feb 16 17:02:18 crc kubenswrapper[4794]: I0216 17:02:18.220439 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96" event={"ID":"9877cb59-9113-4bc5-b05f-3fa3a9c25d45","Type":"ContainerStarted","Data":"3f66d442627f024d558ce0c5641eac120346c79e422926d5d89d8a923f87f09e"}
Feb 16 17:02:18 crc kubenswrapper[4794]: I0216 17:02:18.335914 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs\") pod \"network-metrics-daemon-tf698\" (UID: \"894bff1b-b8b9-4c28-8ffe-0e0469958227\") " pod="openshift-multus/network-metrics-daemon-tf698"
Feb 16 17:02:18 crc kubenswrapper[4794]: I0216 17:02:18.337446 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 16 17:02:18 crc kubenswrapper[4794]: I0216 17:02:18.351435 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/894bff1b-b8b9-4c28-8ffe-0e0469958227-metrics-certs\") pod \"network-metrics-daemon-tf698\" (UID: \"894bff1b-b8b9-4c28-8ffe-0e0469958227\") " pod="openshift-multus/network-metrics-daemon-tf698"
Feb 16 17:02:18 crc kubenswrapper[4794]: I0216 17:02:18.511535 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 16 17:02:18 crc kubenswrapper[4794]: I0216 17:02:18.520649 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tf698"
Feb 16 17:02:18 crc kubenswrapper[4794]: I0216 17:02:18.801678 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a32e5495-8e9f-495d-8bf0-e0e986d23411" path="/var/lib/kubelet/pods/a32e5495-8e9f-495d-8bf0-e0e986d23411/volumes"
Feb 16 17:02:18 crc kubenswrapper[4794]: I0216 17:02:18.802707 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce9cf273-395a-42b2-820b-cd4bf8aa21d6" path="/var/lib/kubelet/pods/ce9cf273-395a-42b2-820b-cd4bf8aa21d6/volumes"
Feb 16 17:02:18 crc kubenswrapper[4794]: I0216 17:02:18.911149 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tf698"]
Feb 16 17:02:18 crc kubenswrapper[4794]: W0216 17:02:18.924847 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod894bff1b_b8b9_4c28_8ffe_0e0469958227.slice/crio-b5751c61ee7fed63adc64a74a629983f63c4a1d4bf8840c85a31b9fcb508e9c1 WatchSource:0}: Error finding container b5751c61ee7fed63adc64a74a629983f63c4a1d4bf8840c85a31b9fcb508e9c1: Status 404 returned error can't find the container with id b5751c61ee7fed63adc64a74a629983f63c4a1d4bf8840c85a31b9fcb508e9c1
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.232563 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tf698" event={"ID":"894bff1b-b8b9-4c28-8ffe-0e0469958227","Type":"ContainerStarted","Data":"b5751c61ee7fed63adc64a74a629983f63c4a1d4bf8840c85a31b9fcb508e9c1"}
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.233001 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.247054 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.251084 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96" podStartSLOduration=5.25106703 podStartE2EDuration="5.25106703s" podCreationTimestamp="2026-02-16 17:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:19.24917941 +0000 UTC m=+165.197274057" watchObservedRunningTime="2026-02-16 17:02:19.25106703 +0000 UTC m=+165.199161677"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.314189 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6v5np"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.314538 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6v5np"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.354433 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5df9c79f99-mmwct"]
Feb 16 17:02:19 crc kubenswrapper[4794]: E0216 17:02:19.354694 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9cf273-395a-42b2-820b-cd4bf8aa21d6" containerName="controller-manager"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.354706 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9cf273-395a-42b2-820b-cd4bf8aa21d6" containerName="controller-manager"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.354804 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9cf273-395a-42b2-820b-cd4bf8aa21d6" containerName="controller-manager"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.355168 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.356529 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.356958 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.357110 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.357365 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.357541 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.357776 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.363771 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.367727 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5df9c79f99-mmwct"]
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.453083 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6a5346-b124-4089-8f06-630c21bc3be2-client-ca\") pod \"controller-manager-5df9c79f99-mmwct\" (UID: \"af6a5346-b124-4089-8f06-630c21bc3be2\") " pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.453135 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6a5346-b124-4089-8f06-630c21bc3be2-config\") pod \"controller-manager-5df9c79f99-mmwct\" (UID: \"af6a5346-b124-4089-8f06-630c21bc3be2\") " pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.453164 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk8kb\" (UniqueName: \"kubernetes.io/projected/af6a5346-b124-4089-8f06-630c21bc3be2-kube-api-access-fk8kb\") pod \"controller-manager-5df9c79f99-mmwct\" (UID: \"af6a5346-b124-4089-8f06-630c21bc3be2\") " pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.453237 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6a5346-b124-4089-8f06-630c21bc3be2-serving-cert\") pod \"controller-manager-5df9c79f99-mmwct\" (UID: \"af6a5346-b124-4089-8f06-630c21bc3be2\") " pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.453285 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/af6a5346-b124-4089-8f06-630c21bc3be2-proxy-ca-bundles\") pod \"controller-manager-5df9c79f99-mmwct\" (UID: \"af6a5346-b124-4089-8f06-630c21bc3be2\") " pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.557423 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6a5346-b124-4089-8f06-630c21bc3be2-client-ca\") pod \"controller-manager-5df9c79f99-mmwct\" (UID: \"af6a5346-b124-4089-8f06-630c21bc3be2\") " pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.557473 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6a5346-b124-4089-8f06-630c21bc3be2-config\") pod \"controller-manager-5df9c79f99-mmwct\" (UID: \"af6a5346-b124-4089-8f06-630c21bc3be2\") " pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.557511 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk8kb\" (UniqueName: \"kubernetes.io/projected/af6a5346-b124-4089-8f06-630c21bc3be2-kube-api-access-fk8kb\") pod \"controller-manager-5df9c79f99-mmwct\" (UID: \"af6a5346-b124-4089-8f06-630c21bc3be2\") " pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.557561 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6a5346-b124-4089-8f06-630c21bc3be2-serving-cert\") pod \"controller-manager-5df9c79f99-mmwct\" (UID: \"af6a5346-b124-4089-8f06-630c21bc3be2\") " pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.557600 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/af6a5346-b124-4089-8f06-630c21bc3be2-proxy-ca-bundles\") pod \"controller-manager-5df9c79f99-mmwct\" (UID: \"af6a5346-b124-4089-8f06-630c21bc3be2\") " pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.558982 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6a5346-b124-4089-8f06-630c21bc3be2-client-ca\") pod \"controller-manager-5df9c79f99-mmwct\" (UID: \"af6a5346-b124-4089-8f06-630c21bc3be2\") " pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.559234 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6a5346-b124-4089-8f06-630c21bc3be2-config\") pod \"controller-manager-5df9c79f99-mmwct\" (UID: \"af6a5346-b124-4089-8f06-630c21bc3be2\") " pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.559458 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/af6a5346-b124-4089-8f06-630c21bc3be2-proxy-ca-bundles\") pod \"controller-manager-5df9c79f99-mmwct\" (UID: \"af6a5346-b124-4089-8f06-630c21bc3be2\") " pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.563235 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6a5346-b124-4089-8f06-630c21bc3be2-serving-cert\") pod \"controller-manager-5df9c79f99-mmwct\" (UID: \"af6a5346-b124-4089-8f06-630c21bc3be2\") " pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.581537 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk8kb\" (UniqueName: \"kubernetes.io/projected/af6a5346-b124-4089-8f06-630c21bc3be2-kube-api-access-fk8kb\") pod \"controller-manager-5df9c79f99-mmwct\" (UID: \"af6a5346-b124-4089-8f06-630c21bc3be2\") " pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.673551 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.801656 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6v5np"
Feb 16 17:02:19 crc kubenswrapper[4794]: I0216 17:02:19.922853 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5df9c79f99-mmwct"]
Feb 16 17:02:19 crc kubenswrapper[4794]: W0216 17:02:19.937070 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf6a5346_b124_4089_8f06_630c21bc3be2.slice/crio-fc0b145482e091114ae84b26d7b12f3772bff0b72eb871869d2d75aa490ceca4 WatchSource:0}: Error finding container fc0b145482e091114ae84b26d7b12f3772bff0b72eb871869d2d75aa490ceca4: Status 404 returned error can't find the container with id fc0b145482e091114ae84b26d7b12f3772bff0b72eb871869d2d75aa490ceca4
Feb 16 17:02:20 crc kubenswrapper[4794]: I0216 17:02:20.131240 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 16 17:02:20 crc kubenswrapper[4794]: I0216 17:02:20.132814 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 17:02:20 crc kubenswrapper[4794]: I0216 17:02:20.135985 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 16 17:02:20 crc kubenswrapper[4794]: I0216 17:02:20.136973 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 16 17:02:20 crc kubenswrapper[4794]: I0216 17:02:20.140676 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 17:02:20 crc kubenswrapper[4794]: I0216 17:02:20.140737 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 17:02:20 crc kubenswrapper[4794]: I0216 17:02:20.143559 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 16 17:02:20 crc kubenswrapper[4794]: I0216 17:02:20.164734 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/522b6fa4-fce1-4c7d-875c-b1a776c3d024-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"522b6fa4-fce1-4c7d-875c-b1a776c3d024\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 17:02:20 crc kubenswrapper[4794]: I0216 17:02:20.164799 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/522b6fa4-fce1-4c7d-875c-b1a776c3d024-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"522b6fa4-fce1-4c7d-875c-b1a776c3d024\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 17:02:20 crc kubenswrapper[4794]: I0216 17:02:20.241538 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct" event={"ID":"af6a5346-b124-4089-8f06-630c21bc3be2","Type":"ContainerStarted","Data":"da132c7baa89641e3513bb4c4d3a26674332115e0003c08e27875c4c378667ee"}
Feb 16 17:02:20 crc kubenswrapper[4794]: I0216 17:02:20.241593 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct" event={"ID":"af6a5346-b124-4089-8f06-630c21bc3be2","Type":"ContainerStarted","Data":"fc0b145482e091114ae84b26d7b12f3772bff0b72eb871869d2d75aa490ceca4"}
Feb 16 17:02:20 crc kubenswrapper[4794]: I0216 17:02:20.243484 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tf698" event={"ID":"894bff1b-b8b9-4c28-8ffe-0e0469958227","Type":"ContainerStarted","Data":"960d36fb0906cf9b45621406e4aee218f2a0b2452cd268aa88df5b1bcc8a6f62"}
Feb 16 17:02:20 crc kubenswrapper[4794]: I0216 17:02:20.243523 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tf698" event={"ID":"894bff1b-b8b9-4c28-8ffe-0e0469958227","Type":"ContainerStarted","Data":"ace3f8af19ba629e0cf5e92554fc778ee319a07b7d4e4c8b7ca06bfc872a64f5"}
Feb 16 17:02:20 crc kubenswrapper[4794]: I0216 17:02:20.265806 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/522b6fa4-fce1-4c7d-875c-b1a776c3d024-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"522b6fa4-fce1-4c7d-875c-b1a776c3d024\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 17:02:20 crc kubenswrapper[4794]: I0216 17:02:20.265912 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/522b6fa4-fce1-4c7d-875c-b1a776c3d024-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"522b6fa4-fce1-4c7d-875c-b1a776c3d024\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 17:02:20 crc kubenswrapper[4794]: I0216 17:02:20.265996 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/522b6fa4-fce1-4c7d-875c-b1a776c3d024-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"522b6fa4-fce1-4c7d-875c-b1a776c3d024\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 17:02:20 crc kubenswrapper[4794]: I0216 17:02:20.286470 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/522b6fa4-fce1-4c7d-875c-b1a776c3d024-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"522b6fa4-fce1-4c7d-875c-b1a776c3d024\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 17:02:20 crc kubenswrapper[4794]: I0216 17:02:20.304423 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6v5np"
Feb 16 17:02:20 crc kubenswrapper[4794]: I0216 17:02:20.466907 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 17:02:20 crc kubenswrapper[4794]: I0216 17:02:20.926696 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 16 17:02:20 crc kubenswrapper[4794]: W0216 17:02:20.933823 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod522b6fa4_fce1_4c7d_875c_b1a776c3d024.slice/crio-622fe98ec8034a98e946064f60e8c42163aaa65ce9061d41a77cf01036a83b8a WatchSource:0}: Error finding container 622fe98ec8034a98e946064f60e8c42163aaa65ce9061d41a77cf01036a83b8a: Status 404 returned error can't find the container with id 622fe98ec8034a98e946064f60e8c42163aaa65ce9061d41a77cf01036a83b8a
Feb 16 17:02:21 crc kubenswrapper[4794]: I0216 17:02:21.248495 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"522b6fa4-fce1-4c7d-875c-b1a776c3d024","Type":"ContainerStarted","Data":"622fe98ec8034a98e946064f60e8c42163aaa65ce9061d41a77cf01036a83b8a"}
Feb 16 17:02:21 crc kubenswrapper[4794]: I0216 17:02:21.270332 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct" podStartSLOduration=7.270310298 podStartE2EDuration="7.270310298s" podCreationTimestamp="2026-02-16 17:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:21.267219256 +0000 UTC m=+167.215313923" watchObservedRunningTime="2026-02-16 17:02:21.270310298 +0000 UTC m=+167.218404945"
Feb 16 17:02:21 crc kubenswrapper[4794]: I0216 17:02:21.694050 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5sk9z"
Feb 16 17:02:21 crc kubenswrapper[4794]: I0216 17:02:21.694117 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5sk9z"
Feb 16 17:02:21 crc kubenswrapper[4794]: I0216 17:02:21.736444 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5sk9z"
Feb 16 17:02:21 crc kubenswrapper[4794]: I0216 17:02:21.753596 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-tf698" podStartSLOduration=146.753578103 podStartE2EDuration="2m26.753578103s" podCreationTimestamp="2026-02-16 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:21.281470615 +0000 UTC m=+167.229565262" watchObservedRunningTime="2026-02-16 17:02:21.753578103 +0000 UTC m=+167.701672750"
Feb 16 17:02:22 crc kubenswrapper[4794]: I0216 17:02:22.295420 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5sk9z"
Feb 16 17:02:22 crc kubenswrapper[4794]: I0216 17:02:22.301318 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7nzlb"
Feb 16 17:02:22 crc kubenswrapper[4794]: I0216 17:02:22.301378 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7nzlb"
Feb 16 17:02:22 crc kubenswrapper[4794]: I0216 17:02:22.344166 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7nzlb"
Feb 16 17:02:22 crc kubenswrapper[4794]: I0216 17:02:22.683501 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j2khs"
Feb 16 17:02:22 crc kubenswrapper[4794]: I0216 17:02:22.683599 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j2khs"
Feb 16 17:02:22 crc kubenswrapper[4794]: I0216
17:02:22.726016 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j2khs" Feb 16 17:02:23 crc kubenswrapper[4794]: I0216 17:02:23.263785 4794 generic.go:334] "Generic (PLEG): container finished" podID="522b6fa4-fce1-4c7d-875c-b1a776c3d024" containerID="6930406ddd90ff90c49fbdf2a9c8742d7fb805d717cd33b8ec2a1f474829266e" exitCode=0 Feb 16 17:02:23 crc kubenswrapper[4794]: I0216 17:02:23.263839 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"522b6fa4-fce1-4c7d-875c-b1a776c3d024","Type":"ContainerDied","Data":"6930406ddd90ff90c49fbdf2a9c8742d7fb805d717cd33b8ec2a1f474829266e"} Feb 16 17:02:23 crc kubenswrapper[4794]: I0216 17:02:23.305857 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7nzlb" Feb 16 17:02:23 crc kubenswrapper[4794]: I0216 17:02:23.312642 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j2khs" Feb 16 17:02:24 crc kubenswrapper[4794]: I0216 17:02:24.276915 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gj5mf" event={"ID":"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4","Type":"ContainerStarted","Data":"b6d16b138537156df50e3491a6291cd82b5b46bd217edc0993950375f0988c84"} Feb 16 17:02:24 crc kubenswrapper[4794]: I0216 17:02:24.702103 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 17:02:24 crc kubenswrapper[4794]: I0216 17:02:24.763209 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/522b6fa4-fce1-4c7d-875c-b1a776c3d024-kube-api-access\") pod \"522b6fa4-fce1-4c7d-875c-b1a776c3d024\" (UID: \"522b6fa4-fce1-4c7d-875c-b1a776c3d024\") " Feb 16 17:02:24 crc kubenswrapper[4794]: I0216 17:02:24.763407 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/522b6fa4-fce1-4c7d-875c-b1a776c3d024-kubelet-dir\") pod \"522b6fa4-fce1-4c7d-875c-b1a776c3d024\" (UID: \"522b6fa4-fce1-4c7d-875c-b1a776c3d024\") " Feb 16 17:02:24 crc kubenswrapper[4794]: I0216 17:02:24.763793 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/522b6fa4-fce1-4c7d-875c-b1a776c3d024-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "522b6fa4-fce1-4c7d-875c-b1a776c3d024" (UID: "522b6fa4-fce1-4c7d-875c-b1a776c3d024"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:02:24 crc kubenswrapper[4794]: I0216 17:02:24.784487 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/522b6fa4-fce1-4c7d-875c-b1a776c3d024-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "522b6fa4-fce1-4c7d-875c-b1a776c3d024" (UID: "522b6fa4-fce1-4c7d-875c-b1a776c3d024"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:02:24 crc kubenswrapper[4794]: I0216 17:02:24.865465 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/522b6fa4-fce1-4c7d-875c-b1a776c3d024-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:24 crc kubenswrapper[4794]: I0216 17:02:24.865507 4794 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/522b6fa4-fce1-4c7d-875c-b1a776c3d024-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:24 crc kubenswrapper[4794]: I0216 17:02:24.927872 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 17:02:24 crc kubenswrapper[4794]: E0216 17:02:24.928178 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="522b6fa4-fce1-4c7d-875c-b1a776c3d024" containerName="pruner" Feb 16 17:02:24 crc kubenswrapper[4794]: I0216 17:02:24.928190 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="522b6fa4-fce1-4c7d-875c-b1a776c3d024" containerName="pruner" Feb 16 17:02:24 crc kubenswrapper[4794]: I0216 17:02:24.928389 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="522b6fa4-fce1-4c7d-875c-b1a776c3d024" containerName="pruner" Feb 16 17:02:24 crc kubenswrapper[4794]: I0216 17:02:24.931772 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:02:24 crc kubenswrapper[4794]: I0216 17:02:24.934457 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 17:02:25 crc kubenswrapper[4794]: I0216 17:02:25.067839 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cd94818-7cad-4289-9c9d-ebdddf83a6c8-kube-api-access\") pod \"installer-9-crc\" (UID: \"4cd94818-7cad-4289-9c9d-ebdddf83a6c8\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:02:25 crc kubenswrapper[4794]: I0216 17:02:25.067930 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cd94818-7cad-4289-9c9d-ebdddf83a6c8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4cd94818-7cad-4289-9c9d-ebdddf83a6c8\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:02:25 crc kubenswrapper[4794]: I0216 17:02:25.067951 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4cd94818-7cad-4289-9c9d-ebdddf83a6c8-var-lock\") pod \"installer-9-crc\" (UID: \"4cd94818-7cad-4289-9c9d-ebdddf83a6c8\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:02:25 crc kubenswrapper[4794]: I0216 17:02:25.169791 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cd94818-7cad-4289-9c9d-ebdddf83a6c8-kube-api-access\") pod \"installer-9-crc\" (UID: \"4cd94818-7cad-4289-9c9d-ebdddf83a6c8\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:02:25 crc kubenswrapper[4794]: I0216 17:02:25.170290 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/4cd94818-7cad-4289-9c9d-ebdddf83a6c8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4cd94818-7cad-4289-9c9d-ebdddf83a6c8\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:02:25 crc kubenswrapper[4794]: I0216 17:02:25.170344 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4cd94818-7cad-4289-9c9d-ebdddf83a6c8-var-lock\") pod \"installer-9-crc\" (UID: \"4cd94818-7cad-4289-9c9d-ebdddf83a6c8\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:02:25 crc kubenswrapper[4794]: I0216 17:02:25.170448 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4cd94818-7cad-4289-9c9d-ebdddf83a6c8-var-lock\") pod \"installer-9-crc\" (UID: \"4cd94818-7cad-4289-9c9d-ebdddf83a6c8\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:02:25 crc kubenswrapper[4794]: I0216 17:02:25.170493 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cd94818-7cad-4289-9c9d-ebdddf83a6c8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4cd94818-7cad-4289-9c9d-ebdddf83a6c8\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:02:25 crc kubenswrapper[4794]: I0216 17:02:25.195623 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cd94818-7cad-4289-9c9d-ebdddf83a6c8-kube-api-access\") pod \"installer-9-crc\" (UID: \"4cd94818-7cad-4289-9c9d-ebdddf83a6c8\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:02:25 crc kubenswrapper[4794]: I0216 17:02:25.252788 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:02:25 crc kubenswrapper[4794]: I0216 17:02:25.255985 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qfr5h"] Feb 16 17:02:25 crc kubenswrapper[4794]: I0216 17:02:25.298552 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"522b6fa4-fce1-4c7d-875c-b1a776c3d024","Type":"ContainerDied","Data":"622fe98ec8034a98e946064f60e8c42163aaa65ce9061d41a77cf01036a83b8a"} Feb 16 17:02:25 crc kubenswrapper[4794]: I0216 17:02:25.298835 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="622fe98ec8034a98e946064f60e8c42163aaa65ce9061d41a77cf01036a83b8a" Feb 16 17:02:25 crc kubenswrapper[4794]: I0216 17:02:25.298590 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 17:02:25 crc kubenswrapper[4794]: I0216 17:02:25.303711 4794 generic.go:334] "Generic (PLEG): container finished" podID="68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4" containerID="b6d16b138537156df50e3491a6291cd82b5b46bd217edc0993950375f0988c84" exitCode=0 Feb 16 17:02:25 crc kubenswrapper[4794]: I0216 17:02:25.303741 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gj5mf" event={"ID":"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4","Type":"ContainerDied","Data":"b6d16b138537156df50e3491a6291cd82b5b46bd217edc0993950375f0988c84"} Feb 16 17:02:25 crc kubenswrapper[4794]: I0216 17:02:25.952443 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 17:02:25 crc kubenswrapper[4794]: W0216 17:02:25.962296 4794 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pod4cd94818_7cad_4289_9c9d_ebdddf83a6c8.slice/crio-509f565bef29b46d1fd24ad7e0aa27c2c8539e60eee58c65d7cec2ffc9841677 WatchSource:0}: Error finding container 509f565bef29b46d1fd24ad7e0aa27c2c8539e60eee58c65d7cec2ffc9841677: Status 404 returned error can't find the container with id 509f565bef29b46d1fd24ad7e0aa27c2c8539e60eee58c65d7cec2ffc9841677 Feb 16 17:02:26 crc kubenswrapper[4794]: I0216 17:02:26.093011 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2khs"] Feb 16 17:02:26 crc kubenswrapper[4794]: I0216 17:02:26.093237 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j2khs" podUID="fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4" containerName="registry-server" containerID="cri-o://de60c993299e13ba2c0c694214d1ed39cad8da75f3fcfaaff735e348ac8cf73f" gracePeriod=2 Feb 16 17:02:26 crc kubenswrapper[4794]: I0216 17:02:26.321112 4794 generic.go:334] "Generic (PLEG): container finished" podID="0f9ab6e7-980e-4a61-9072-cd2baa7c51ab" containerID="495ae736baebd4c74d9e49656cdb4b3cc30f38d7199efe52d7009055906d49de" exitCode=0 Feb 16 17:02:26 crc kubenswrapper[4794]: I0216 17:02:26.324863 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cctn" event={"ID":"0f9ab6e7-980e-4a61-9072-cd2baa7c51ab","Type":"ContainerDied","Data":"495ae736baebd4c74d9e49656cdb4b3cc30f38d7199efe52d7009055906d49de"} Feb 16 17:02:26 crc kubenswrapper[4794]: I0216 17:02:26.340127 4794 generic.go:334] "Generic (PLEG): container finished" podID="0c2a611b-e699-45f3-a8ca-a687be266a1f" containerID="2e4ddedae6fe585a8bcfb6256be26a7355c61d9054ec576ec0e82118a6890e4b" exitCode=0 Feb 16 17:02:26 crc kubenswrapper[4794]: I0216 17:02:26.340195 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gxf57" 
event={"ID":"0c2a611b-e699-45f3-a8ca-a687be266a1f","Type":"ContainerDied","Data":"2e4ddedae6fe585a8bcfb6256be26a7355c61d9054ec576ec0e82118a6890e4b"} Feb 16 17:02:26 crc kubenswrapper[4794]: I0216 17:02:26.351507 4794 generic.go:334] "Generic (PLEG): container finished" podID="fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4" containerID="de60c993299e13ba2c0c694214d1ed39cad8da75f3fcfaaff735e348ac8cf73f" exitCode=0 Feb 16 17:02:26 crc kubenswrapper[4794]: I0216 17:02:26.351580 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2khs" event={"ID":"fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4","Type":"ContainerDied","Data":"de60c993299e13ba2c0c694214d1ed39cad8da75f3fcfaaff735e348ac8cf73f"} Feb 16 17:02:26 crc kubenswrapper[4794]: I0216 17:02:26.357679 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4cd94818-7cad-4289-9c9d-ebdddf83a6c8","Type":"ContainerStarted","Data":"509f565bef29b46d1fd24ad7e0aa27c2c8539e60eee58c65d7cec2ffc9841677"} Feb 16 17:02:26 crc kubenswrapper[4794]: I0216 17:02:26.359829 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gj5mf" event={"ID":"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4","Type":"ContainerStarted","Data":"f3f3f062dc5ecbd0d964cbc1b2b1277a80bb8bda777974051ad0475694102b37"} Feb 16 17:02:26 crc kubenswrapper[4794]: I0216 17:02:26.522281 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j2khs" Feb 16 17:02:26 crc kubenswrapper[4794]: I0216 17:02:26.545059 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gj5mf" podStartSLOduration=2.277431643 podStartE2EDuration="47.545041882s" podCreationTimestamp="2026-02-16 17:01:39 +0000 UTC" firstStartedPulling="2026-02-16 17:01:40.807657597 +0000 UTC m=+126.755752244" lastFinishedPulling="2026-02-16 17:02:26.075267836 +0000 UTC m=+172.023362483" observedRunningTime="2026-02-16 17:02:26.395337863 +0000 UTC m=+172.343432510" watchObservedRunningTime="2026-02-16 17:02:26.545041882 +0000 UTC m=+172.493136519" Feb 16 17:02:26 crc kubenswrapper[4794]: I0216 17:02:26.612215 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4-catalog-content\") pod \"fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4\" (UID: \"fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4\") " Feb 16 17:02:26 crc kubenswrapper[4794]: I0216 17:02:26.612292 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4-utilities\") pod \"fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4\" (UID: \"fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4\") " Feb 16 17:02:26 crc kubenswrapper[4794]: I0216 17:02:26.612402 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljpff\" (UniqueName: \"kubernetes.io/projected/fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4-kube-api-access-ljpff\") pod \"fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4\" (UID: \"fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4\") " Feb 16 17:02:26 crc kubenswrapper[4794]: I0216 17:02:26.631291 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4-utilities" 
(OuterVolumeSpecName: "utilities") pod "fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4" (UID: "fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:02:26 crc kubenswrapper[4794]: I0216 17:02:26.637033 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4-kube-api-access-ljpff" (OuterVolumeSpecName: "kube-api-access-ljpff") pod "fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4" (UID: "fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4"). InnerVolumeSpecName "kube-api-access-ljpff". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:02:26 crc kubenswrapper[4794]: I0216 17:02:26.713794 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:26 crc kubenswrapper[4794]: I0216 17:02:26.713825 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljpff\" (UniqueName: \"kubernetes.io/projected/fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4-kube-api-access-ljpff\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:26 crc kubenswrapper[4794]: I0216 17:02:26.759488 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4" (UID: "fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:02:26 crc kubenswrapper[4794]: I0216 17:02:26.815109 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:27 crc kubenswrapper[4794]: I0216 17:02:27.367710 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2khs" event={"ID":"fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4","Type":"ContainerDied","Data":"41acacd8ecfbc255c93b4c0770a793f7e164a6ea471ec9d3390936a9caf52573"} Feb 16 17:02:27 crc kubenswrapper[4794]: I0216 17:02:27.368094 4794 scope.go:117] "RemoveContainer" containerID="de60c993299e13ba2c0c694214d1ed39cad8da75f3fcfaaff735e348ac8cf73f" Feb 16 17:02:27 crc kubenswrapper[4794]: I0216 17:02:27.367769 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2khs" Feb 16 17:02:27 crc kubenswrapper[4794]: I0216 17:02:27.369032 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4cd94818-7cad-4289-9c9d-ebdddf83a6c8","Type":"ContainerStarted","Data":"9edf09f0a5c258a8a2a6e66cb66b99e2069e04fe0028acc4c1d87e32eefe84a1"} Feb 16 17:02:27 crc kubenswrapper[4794]: I0216 17:02:27.671789 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.671769059 podStartE2EDuration="3.671769059s" podCreationTimestamp="2026-02-16 17:02:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:27.390723989 +0000 UTC m=+173.338818636" watchObservedRunningTime="2026-02-16 17:02:27.671769059 +0000 UTC m=+173.619863706" Feb 16 17:02:27 crc kubenswrapper[4794]: I0216 17:02:27.674561 4794 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2khs"] Feb 16 17:02:27 crc kubenswrapper[4794]: I0216 17:02:27.676281 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j2khs"] Feb 16 17:02:28 crc kubenswrapper[4794]: I0216 17:02:28.652589 4794 scope.go:117] "RemoveContainer" containerID="86002c4fc1d3c2d06a269afcd1ebb62da1898b3a1f8dc562fead5a84b0cb3c6a" Feb 16 17:02:28 crc kubenswrapper[4794]: I0216 17:02:28.669548 4794 scope.go:117] "RemoveContainer" containerID="a3f26ac3c6a59682308df3e4040334be9220b1204d01b2c57cc524b70f8deefb" Feb 16 17:02:28 crc kubenswrapper[4794]: I0216 17:02:28.799237 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4" path="/var/lib/kubelet/pods/fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4/volumes" Feb 16 17:02:29 crc kubenswrapper[4794]: I0216 17:02:29.381615 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cctn" event={"ID":"0f9ab6e7-980e-4a61-9072-cd2baa7c51ab","Type":"ContainerStarted","Data":"c42d498b85c6841f1e1c4f7ce19346e9e3f22d61cc77ccd19b5868e00ad59207"} Feb 16 17:02:29 crc kubenswrapper[4794]: I0216 17:02:29.402128 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7cctn" podStartSLOduration=2.5369270310000003 podStartE2EDuration="51.402109068s" podCreationTimestamp="2026-02-16 17:01:38 +0000 UTC" firstStartedPulling="2026-02-16 17:01:39.787461392 +0000 UTC m=+125.735556039" lastFinishedPulling="2026-02-16 17:02:28.652643429 +0000 UTC m=+174.600738076" observedRunningTime="2026-02-16 17:02:29.398644416 +0000 UTC m=+175.346739073" watchObservedRunningTime="2026-02-16 17:02:29.402109068 +0000 UTC m=+175.350203715" Feb 16 17:02:29 crc kubenswrapper[4794]: I0216 17:02:29.673837 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct" Feb 16 17:02:29 crc kubenswrapper[4794]: I0216 17:02:29.678343 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct" Feb 16 17:02:29 crc kubenswrapper[4794]: I0216 17:02:29.699514 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gj5mf" Feb 16 17:02:29 crc kubenswrapper[4794]: I0216 17:02:29.699551 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gj5mf" Feb 16 17:02:29 crc kubenswrapper[4794]: I0216 17:02:29.743025 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gj5mf" Feb 16 17:02:34 crc kubenswrapper[4794]: I0216 17:02:34.415979 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gxf57" event={"ID":"0c2a611b-e699-45f3-a8ca-a687be266a1f","Type":"ContainerStarted","Data":"86cc8abccf7db28968aaf72abf1fc5df9b9e3db3afcb45c5a6e48f297959dcf5"} Feb 16 17:02:34 crc kubenswrapper[4794]: I0216 17:02:34.420143 4794 generic.go:334] "Generic (PLEG): container finished" podID="7039bec8-af08-4439-be97-c6ee7d3a1c3b" containerID="7d87e64460cf050717ed51e0cf9c76e7d822398ef5991c59f28acde1e65235d3" exitCode=0 Feb 16 17:02:34 crc kubenswrapper[4794]: I0216 17:02:34.420190 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69m8b" event={"ID":"7039bec8-af08-4439-be97-c6ee7d3a1c3b","Type":"ContainerDied","Data":"7d87e64460cf050717ed51e0cf9c76e7d822398ef5991c59f28acde1e65235d3"} Feb 16 17:02:34 crc kubenswrapper[4794]: I0216 17:02:34.444446 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gxf57" podStartSLOduration=2.811889738 
podStartE2EDuration="55.444427575s" podCreationTimestamp="2026-02-16 17:01:39 +0000 UTC" firstStartedPulling="2026-02-16 17:01:40.811031947 +0000 UTC m=+126.759126594" lastFinishedPulling="2026-02-16 17:02:33.443569784 +0000 UTC m=+179.391664431" observedRunningTime="2026-02-16 17:02:34.441096497 +0000 UTC m=+180.389191144" watchObservedRunningTime="2026-02-16 17:02:34.444427575 +0000 UTC m=+180.392522232" Feb 16 17:02:34 crc kubenswrapper[4794]: I0216 17:02:34.838196 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5df9c79f99-mmwct"] Feb 16 17:02:34 crc kubenswrapper[4794]: I0216 17:02:34.838388 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct" podUID="af6a5346-b124-4089-8f06-630c21bc3be2" containerName="controller-manager" containerID="cri-o://da132c7baa89641e3513bb4c4d3a26674332115e0003c08e27875c4c378667ee" gracePeriod=30 Feb 16 17:02:34 crc kubenswrapper[4794]: I0216 17:02:34.872713 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96"] Feb 16 17:02:34 crc kubenswrapper[4794]: I0216 17:02:34.872974 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96" podUID="9877cb59-9113-4bc5-b05f-3fa3a9c25d45" containerName="route-controller-manager" containerID="cri-o://72c8f9d7ee2d63dddc01c6520dd864efec6d6dbb8a3a5334fecaf21faeebf98a" gracePeriod=30 Feb 16 17:02:35 crc kubenswrapper[4794]: I0216 17:02:35.455165 4794 generic.go:334] "Generic (PLEG): container finished" podID="9877cb59-9113-4bc5-b05f-3fa3a9c25d45" containerID="72c8f9d7ee2d63dddc01c6520dd864efec6d6dbb8a3a5334fecaf21faeebf98a" exitCode=0 Feb 16 17:02:35 crc kubenswrapper[4794]: I0216 17:02:35.455599 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96" event={"ID":"9877cb59-9113-4bc5-b05f-3fa3a9c25d45","Type":"ContainerDied","Data":"72c8f9d7ee2d63dddc01c6520dd864efec6d6dbb8a3a5334fecaf21faeebf98a"} Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.120446 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.151097 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw"] Feb 16 17:02:36 crc kubenswrapper[4794]: E0216 17:02:36.151522 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4" containerName="extract-content" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.151545 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4" containerName="extract-content" Feb 16 17:02:36 crc kubenswrapper[4794]: E0216 17:02:36.151560 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4" containerName="extract-utilities" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.151574 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4" containerName="extract-utilities" Feb 16 17:02:36 crc kubenswrapper[4794]: E0216 17:02:36.151609 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4" containerName="registry-server" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.151620 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4" containerName="registry-server" Feb 16 17:02:36 crc kubenswrapper[4794]: E0216 17:02:36.151629 4794 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9877cb59-9113-4bc5-b05f-3fa3a9c25d45" containerName="route-controller-manager" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.151637 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="9877cb59-9113-4bc5-b05f-3fa3a9c25d45" containerName="route-controller-manager" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.151819 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa11a8c2-cdc5-4d0e-9de2-991b907bb1e4" containerName="registry-server" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.151847 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="9877cb59-9113-4bc5-b05f-3fa3a9c25d45" containerName="route-controller-manager" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.152558 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.160795 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw"] Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.262345 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-client-ca\") pod \"9877cb59-9113-4bc5-b05f-3fa3a9c25d45\" (UID: \"9877cb59-9113-4bc5-b05f-3fa3a9c25d45\") " Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.262391 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86bh5\" (UniqueName: \"kubernetes.io/projected/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-kube-api-access-86bh5\") pod \"9877cb59-9113-4bc5-b05f-3fa3a9c25d45\" (UID: \"9877cb59-9113-4bc5-b05f-3fa3a9c25d45\") " Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.262807 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-config\") pod \"9877cb59-9113-4bc5-b05f-3fa3a9c25d45\" (UID: \"9877cb59-9113-4bc5-b05f-3fa3a9c25d45\") " Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.262845 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-serving-cert\") pod \"9877cb59-9113-4bc5-b05f-3fa3a9c25d45\" (UID: \"9877cb59-9113-4bc5-b05f-3fa3a9c25d45\") " Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.262945 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07616a6b-a602-4f28-a88c-c5f71b56466a-client-ca\") pod \"route-controller-manager-55cb94d69b-vbfgw\" (UID: \"07616a6b-a602-4f28-a88c-c5f71b56466a\") " pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.262989 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcskh\" (UniqueName: \"kubernetes.io/projected/07616a6b-a602-4f28-a88c-c5f71b56466a-kube-api-access-rcskh\") pod \"route-controller-manager-55cb94d69b-vbfgw\" (UID: \"07616a6b-a602-4f28-a88c-c5f71b56466a\") " pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.263056 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07616a6b-a602-4f28-a88c-c5f71b56466a-serving-cert\") pod \"route-controller-manager-55cb94d69b-vbfgw\" (UID: \"07616a6b-a602-4f28-a88c-c5f71b56466a\") " pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.263096 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07616a6b-a602-4f28-a88c-c5f71b56466a-config\") pod \"route-controller-manager-55cb94d69b-vbfgw\" (UID: \"07616a6b-a602-4f28-a88c-c5f71b56466a\") " pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.263853 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-config" (OuterVolumeSpecName: "config") pod "9877cb59-9113-4bc5-b05f-3fa3a9c25d45" (UID: "9877cb59-9113-4bc5-b05f-3fa3a9c25d45"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.264333 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-client-ca" (OuterVolumeSpecName: "client-ca") pod "9877cb59-9113-4bc5-b05f-3fa3a9c25d45" (UID: "9877cb59-9113-4bc5-b05f-3fa3a9c25d45"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.287227 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9877cb59-9113-4bc5-b05f-3fa3a9c25d45" (UID: "9877cb59-9113-4bc5-b05f-3fa3a9c25d45"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.287963 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-kube-api-access-86bh5" (OuterVolumeSpecName: "kube-api-access-86bh5") pod "9877cb59-9113-4bc5-b05f-3fa3a9c25d45" (UID: "9877cb59-9113-4bc5-b05f-3fa3a9c25d45"). 
InnerVolumeSpecName "kube-api-access-86bh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.364772 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07616a6b-a602-4f28-a88c-c5f71b56466a-serving-cert\") pod \"route-controller-manager-55cb94d69b-vbfgw\" (UID: \"07616a6b-a602-4f28-a88c-c5f71b56466a\") " pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.364855 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07616a6b-a602-4f28-a88c-c5f71b56466a-config\") pod \"route-controller-manager-55cb94d69b-vbfgw\" (UID: \"07616a6b-a602-4f28-a88c-c5f71b56466a\") " pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.364900 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07616a6b-a602-4f28-a88c-c5f71b56466a-client-ca\") pod \"route-controller-manager-55cb94d69b-vbfgw\" (UID: \"07616a6b-a602-4f28-a88c-c5f71b56466a\") " pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.364943 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcskh\" (UniqueName: \"kubernetes.io/projected/07616a6b-a602-4f28-a88c-c5f71b56466a-kube-api-access-rcskh\") pod \"route-controller-manager-55cb94d69b-vbfgw\" (UID: \"07616a6b-a602-4f28-a88c-c5f71b56466a\") " pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.365291 4794 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.369209 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86bh5\" (UniqueName: \"kubernetes.io/projected/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-kube-api-access-86bh5\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.369242 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.369445 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07616a6b-a602-4f28-a88c-c5f71b56466a-serving-cert\") pod \"route-controller-manager-55cb94d69b-vbfgw\" (UID: \"07616a6b-a602-4f28-a88c-c5f71b56466a\") " pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.369512 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9877cb59-9113-4bc5-b05f-3fa3a9c25d45-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.366174 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07616a6b-a602-4f28-a88c-c5f71b56466a-client-ca\") pod \"route-controller-manager-55cb94d69b-vbfgw\" (UID: \"07616a6b-a602-4f28-a88c-c5f71b56466a\") " pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.369747 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07616a6b-a602-4f28-a88c-c5f71b56466a-config\") pod 
\"route-controller-manager-55cb94d69b-vbfgw\" (UID: \"07616a6b-a602-4f28-a88c-c5f71b56466a\") " pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.384175 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcskh\" (UniqueName: \"kubernetes.io/projected/07616a6b-a602-4f28-a88c-c5f71b56466a-kube-api-access-rcskh\") pod \"route-controller-manager-55cb94d69b-vbfgw\" (UID: \"07616a6b-a602-4f28-a88c-c5f71b56466a\") " pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.461976 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96" event={"ID":"9877cb59-9113-4bc5-b05f-3fa3a9c25d45","Type":"ContainerDied","Data":"3f66d442627f024d558ce0c5641eac120346c79e422926d5d89d8a923f87f09e"} Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.462037 4794 scope.go:117] "RemoveContainer" containerID="72c8f9d7ee2d63dddc01c6520dd864efec6d6dbb8a3a5334fecaf21faeebf98a" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.462031 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.463870 4794 generic.go:334] "Generic (PLEG): container finished" podID="af6a5346-b124-4089-8f06-630c21bc3be2" containerID="da132c7baa89641e3513bb4c4d3a26674332115e0003c08e27875c4c378667ee" exitCode=0 Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.464435 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct" event={"ID":"af6a5346-b124-4089-8f06-630c21bc3be2","Type":"ContainerDied","Data":"da132c7baa89641e3513bb4c4d3a26674332115e0003c08e27875c4c378667ee"} Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.469213 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69m8b" event={"ID":"7039bec8-af08-4439-be97-c6ee7d3a1c3b","Type":"ContainerStarted","Data":"9603c865843a7abe69d329048e1905ee5512b76d43a2ffcd6d53c1644b780c09"} Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.480111 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.492554 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-69m8b" podStartSLOduration=2.417180904 podStartE2EDuration="55.49253058s" podCreationTimestamp="2026-02-16 17:01:41 +0000 UTC" firstStartedPulling="2026-02-16 17:01:42.876630317 +0000 UTC m=+128.824724974" lastFinishedPulling="2026-02-16 17:02:35.951980003 +0000 UTC m=+181.900074650" observedRunningTime="2026-02-16 17:02:36.487518897 +0000 UTC m=+182.435613554" watchObservedRunningTime="2026-02-16 17:02:36.49253058 +0000 UTC m=+182.440625227" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.506696 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96"] Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.509818 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79cb89f5b4-xvt96"] Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.516827 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.573223 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6a5346-b124-4089-8f06-630c21bc3be2-config\") pod \"af6a5346-b124-4089-8f06-630c21bc3be2\" (UID: \"af6a5346-b124-4089-8f06-630c21bc3be2\") " Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.575629 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af6a5346-b124-4089-8f06-630c21bc3be2-config" (OuterVolumeSpecName: "config") pod "af6a5346-b124-4089-8f06-630c21bc3be2" (UID: "af6a5346-b124-4089-8f06-630c21bc3be2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.575761 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6a5346-b124-4089-8f06-630c21bc3be2-client-ca\") pod \"af6a5346-b124-4089-8f06-630c21bc3be2\" (UID: \"af6a5346-b124-4089-8f06-630c21bc3be2\") " Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.576232 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af6a5346-b124-4089-8f06-630c21bc3be2-client-ca" (OuterVolumeSpecName: "client-ca") pod "af6a5346-b124-4089-8f06-630c21bc3be2" (UID: "af6a5346-b124-4089-8f06-630c21bc3be2"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.575797 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk8kb\" (UniqueName: \"kubernetes.io/projected/af6a5346-b124-4089-8f06-630c21bc3be2-kube-api-access-fk8kb\") pod \"af6a5346-b124-4089-8f06-630c21bc3be2\" (UID: \"af6a5346-b124-4089-8f06-630c21bc3be2\") " Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.576434 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/af6a5346-b124-4089-8f06-630c21bc3be2-proxy-ca-bundles\") pod \"af6a5346-b124-4089-8f06-630c21bc3be2\" (UID: \"af6a5346-b124-4089-8f06-630c21bc3be2\") " Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.576488 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6a5346-b124-4089-8f06-630c21bc3be2-serving-cert\") pod \"af6a5346-b124-4089-8f06-630c21bc3be2\" (UID: \"af6a5346-b124-4089-8f06-630c21bc3be2\") " Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.576962 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6a5346-b124-4089-8f06-630c21bc3be2-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.576981 4794 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6a5346-b124-4089-8f06-630c21bc3be2-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.577395 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af6a5346-b124-4089-8f06-630c21bc3be2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "af6a5346-b124-4089-8f06-630c21bc3be2" (UID: "af6a5346-b124-4089-8f06-630c21bc3be2"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.586755 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6a5346-b124-4089-8f06-630c21bc3be2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "af6a5346-b124-4089-8f06-630c21bc3be2" (UID: "af6a5346-b124-4089-8f06-630c21bc3be2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.586890 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6a5346-b124-4089-8f06-630c21bc3be2-kube-api-access-fk8kb" (OuterVolumeSpecName: "kube-api-access-fk8kb") pod "af6a5346-b124-4089-8f06-630c21bc3be2" (UID: "af6a5346-b124-4089-8f06-630c21bc3be2"). InnerVolumeSpecName "kube-api-access-fk8kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.678317 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk8kb\" (UniqueName: \"kubernetes.io/projected/af6a5346-b124-4089-8f06-630c21bc3be2-kube-api-access-fk8kb\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.678367 4794 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/af6a5346-b124-4089-8f06-630c21bc3be2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.678380 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6a5346-b124-4089-8f06-630c21bc3be2-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.798056 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9877cb59-9113-4bc5-b05f-3fa3a9c25d45" 
path="/var/lib/kubelet/pods/9877cb59-9113-4bc5-b05f-3fa3a9c25d45/volumes" Feb 16 17:02:36 crc kubenswrapper[4794]: I0216 17:02:36.893946 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw"] Feb 16 17:02:36 crc kubenswrapper[4794]: W0216 17:02:36.901885 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07616a6b_a602_4f28_a88c_c5f71b56466a.slice/crio-76afd5d4fae18a3a01f97ed6b9cd9258e6d923456e942e0faf3367e8b6a5a85f WatchSource:0}: Error finding container 76afd5d4fae18a3a01f97ed6b9cd9258e6d923456e942e0faf3367e8b6a5a85f: Status 404 returned error can't find the container with id 76afd5d4fae18a3a01f97ed6b9cd9258e6d923456e942e0faf3367e8b6a5a85f Feb 16 17:02:37 crc kubenswrapper[4794]: I0216 17:02:37.478492 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct" event={"ID":"af6a5346-b124-4089-8f06-630c21bc3be2","Type":"ContainerDied","Data":"fc0b145482e091114ae84b26d7b12f3772bff0b72eb871869d2d75aa490ceca4"} Feb 16 17:02:37 crc kubenswrapper[4794]: I0216 17:02:37.478564 4794 scope.go:117] "RemoveContainer" containerID="da132c7baa89641e3513bb4c4d3a26674332115e0003c08e27875c4c378667ee" Feb 16 17:02:37 crc kubenswrapper[4794]: I0216 17:02:37.478731 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5df9c79f99-mmwct" Feb 16 17:02:37 crc kubenswrapper[4794]: I0216 17:02:37.481912 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" event={"ID":"07616a6b-a602-4f28-a88c-c5f71b56466a","Type":"ContainerStarted","Data":"9b24ed5bbb960dd3d24fc18e6c9a2888ca98831cc5a77a70d662cf5fa140b8d3"} Feb 16 17:02:37 crc kubenswrapper[4794]: I0216 17:02:37.481945 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" event={"ID":"07616a6b-a602-4f28-a88c-c5f71b56466a","Type":"ContainerStarted","Data":"76afd5d4fae18a3a01f97ed6b9cd9258e6d923456e942e0faf3367e8b6a5a85f"} Feb 16 17:02:37 crc kubenswrapper[4794]: I0216 17:02:37.484668 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" Feb 16 17:02:37 crc kubenswrapper[4794]: I0216 17:02:37.488695 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" Feb 16 17:02:37 crc kubenswrapper[4794]: I0216 17:02:37.515454 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" podStartSLOduration=3.515432968 podStartE2EDuration="3.515432968s" podCreationTimestamp="2026-02-16 17:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:37.512085749 +0000 UTC m=+183.460180416" watchObservedRunningTime="2026-02-16 17:02:37.515432968 +0000 UTC m=+183.463527615" Feb 16 17:02:37 crc kubenswrapper[4794]: I0216 17:02:37.528554 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-5df9c79f99-mmwct"] Feb 16 17:02:37 crc kubenswrapper[4794]: I0216 17:02:37.532658 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5df9c79f99-mmwct"] Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.374137 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-748f84f96-5pn2r"] Feb 16 17:02:38 crc kubenswrapper[4794]: E0216 17:02:38.375165 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af6a5346-b124-4089-8f06-630c21bc3be2" containerName="controller-manager" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.375195 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6a5346-b124-4089-8f06-630c21bc3be2" containerName="controller-manager" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.375373 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="af6a5346-b124-4089-8f06-630c21bc3be2" containerName="controller-manager" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.376053 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.378125 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.378939 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.379201 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.379268 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.380802 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.381031 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.387788 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-748f84f96-5pn2r"] Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.388037 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.397348 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4948f11-fc76-4ab0-880e-670b3db638a9-config\") pod \"controller-manager-748f84f96-5pn2r\" (UID: \"b4948f11-fc76-4ab0-880e-670b3db638a9\") " 
pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.397442 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrjxg\" (UniqueName: \"kubernetes.io/projected/b4948f11-fc76-4ab0-880e-670b3db638a9-kube-api-access-qrjxg\") pod \"controller-manager-748f84f96-5pn2r\" (UID: \"b4948f11-fc76-4ab0-880e-670b3db638a9\") " pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.397494 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4948f11-fc76-4ab0-880e-670b3db638a9-client-ca\") pod \"controller-manager-748f84f96-5pn2r\" (UID: \"b4948f11-fc76-4ab0-880e-670b3db638a9\") " pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.397529 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4948f11-fc76-4ab0-880e-670b3db638a9-proxy-ca-bundles\") pod \"controller-manager-748f84f96-5pn2r\" (UID: \"b4948f11-fc76-4ab0-880e-670b3db638a9\") " pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.397554 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4948f11-fc76-4ab0-880e-670b3db638a9-serving-cert\") pod \"controller-manager-748f84f96-5pn2r\" (UID: \"b4948f11-fc76-4ab0-880e-670b3db638a9\") " pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.498631 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrjxg\" 
(UniqueName: \"kubernetes.io/projected/b4948f11-fc76-4ab0-880e-670b3db638a9-kube-api-access-qrjxg\") pod \"controller-manager-748f84f96-5pn2r\" (UID: \"b4948f11-fc76-4ab0-880e-670b3db638a9\") " pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.498795 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4948f11-fc76-4ab0-880e-670b3db638a9-client-ca\") pod \"controller-manager-748f84f96-5pn2r\" (UID: \"b4948f11-fc76-4ab0-880e-670b3db638a9\") " pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.498832 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4948f11-fc76-4ab0-880e-670b3db638a9-proxy-ca-bundles\") pod \"controller-manager-748f84f96-5pn2r\" (UID: \"b4948f11-fc76-4ab0-880e-670b3db638a9\") " pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.498858 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4948f11-fc76-4ab0-880e-670b3db638a9-serving-cert\") pod \"controller-manager-748f84f96-5pn2r\" (UID: \"b4948f11-fc76-4ab0-880e-670b3db638a9\") " pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.498904 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4948f11-fc76-4ab0-880e-670b3db638a9-config\") pod \"controller-manager-748f84f96-5pn2r\" (UID: \"b4948f11-fc76-4ab0-880e-670b3db638a9\") " pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.500171 4794 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4948f11-fc76-4ab0-880e-670b3db638a9-client-ca\") pod \"controller-manager-748f84f96-5pn2r\" (UID: \"b4948f11-fc76-4ab0-880e-670b3db638a9\") " pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.500677 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4948f11-fc76-4ab0-880e-670b3db638a9-proxy-ca-bundles\") pod \"controller-manager-748f84f96-5pn2r\" (UID: \"b4948f11-fc76-4ab0-880e-670b3db638a9\") " pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.500699 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4948f11-fc76-4ab0-880e-670b3db638a9-config\") pod \"controller-manager-748f84f96-5pn2r\" (UID: \"b4948f11-fc76-4ab0-880e-670b3db638a9\") " pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.508022 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4948f11-fc76-4ab0-880e-670b3db638a9-serving-cert\") pod \"controller-manager-748f84f96-5pn2r\" (UID: \"b4948f11-fc76-4ab0-880e-670b3db638a9\") " pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.523874 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrjxg\" (UniqueName: \"kubernetes.io/projected/b4948f11-fc76-4ab0-880e-670b3db638a9-kube-api-access-qrjxg\") pod \"controller-manager-748f84f96-5pn2r\" (UID: \"b4948f11-fc76-4ab0-880e-670b3db638a9\") " pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" Feb 16 17:02:38 crc 
kubenswrapper[4794]: I0216 17:02:38.693717 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r"
Feb 16 17:02:38 crc kubenswrapper[4794]: I0216 17:02:38.799760 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af6a5346-b124-4089-8f06-630c21bc3be2" path="/var/lib/kubelet/pods/af6a5346-b124-4089-8f06-630c21bc3be2/volumes"
Feb 16 17:02:39 crc kubenswrapper[4794]: I0216 17:02:39.099812 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7cctn"
Feb 16 17:02:39 crc kubenswrapper[4794]: I0216 17:02:39.100161 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7cctn"
Feb 16 17:02:39 crc kubenswrapper[4794]: I0216 17:02:39.109708 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-748f84f96-5pn2r"]
Feb 16 17:02:39 crc kubenswrapper[4794]: I0216 17:02:39.179281 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7cctn"
Feb 16 17:02:39 crc kubenswrapper[4794]: I0216 17:02:39.493404 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" event={"ID":"b4948f11-fc76-4ab0-880e-670b3db638a9","Type":"ContainerStarted","Data":"1f116119c81ef9cdeaa100618d064abaaa7f9ad421f124dedd31a7018dcc44b6"}
Feb 16 17:02:39 crc kubenswrapper[4794]: I0216 17:02:39.533373 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7cctn"
Feb 16 17:02:39 crc kubenswrapper[4794]: I0216 17:02:39.536802 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gxf57"
Feb 16 17:02:39 crc kubenswrapper[4794]: I0216 17:02:39.536888 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gxf57"
Feb 16 17:02:39 crc kubenswrapper[4794]: I0216 17:02:39.609448 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gxf57"
Feb 16 17:02:39 crc kubenswrapper[4794]: I0216 17:02:39.742216 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gj5mf"
Feb 16 17:02:40 crc kubenswrapper[4794]: I0216 17:02:40.500441 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" event={"ID":"b4948f11-fc76-4ab0-880e-670b3db638a9","Type":"ContainerStarted","Data":"4aeff2ceb41fca1d1666270db92f8f3f9eb80007de15d4252e3edfaf2860d748"}
Feb 16 17:02:40 crc kubenswrapper[4794]: I0216 17:02:40.501763 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r"
Feb 16 17:02:40 crc kubenswrapper[4794]: I0216 17:02:40.509039 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r"
Feb 16 17:02:40 crc kubenswrapper[4794]: I0216 17:02:40.548141 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" podStartSLOduration=6.548105431 podStartE2EDuration="6.548105431s" podCreationTimestamp="2026-02-16 17:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:40.520565679 +0000 UTC m=+186.468660326" watchObservedRunningTime="2026-02-16 17:02:40.548105431 +0000 UTC m=+186.496200078"
Feb 16 17:02:40 crc kubenswrapper[4794]: I0216 17:02:40.564268 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gxf57"
Feb 16 17:02:41 crc kubenswrapper[4794]: I0216 17:02:41.094154 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gxf57"]
Feb 16 17:02:41 crc kubenswrapper[4794]: I0216 17:02:41.689106 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-69m8b"
Feb 16 17:02:41 crc kubenswrapper[4794]: I0216 17:02:41.689246 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-69m8b"
Feb 16 17:02:41 crc kubenswrapper[4794]: I0216 17:02:41.740423 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-69m8b"
Feb 16 17:02:42 crc kubenswrapper[4794]: I0216 17:02:42.102972 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gj5mf"]
Feb 16 17:02:42 crc kubenswrapper[4794]: I0216 17:02:42.103487 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gj5mf" podUID="68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4" containerName="registry-server" containerID="cri-o://f3f3f062dc5ecbd0d964cbc1b2b1277a80bb8bda777974051ad0475694102b37" gracePeriod=2
Feb 16 17:02:42 crc kubenswrapper[4794]: I0216 17:02:42.516498 4794 generic.go:334] "Generic (PLEG): container finished" podID="68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4" containerID="f3f3f062dc5ecbd0d964cbc1b2b1277a80bb8bda777974051ad0475694102b37" exitCode=0
Feb 16 17:02:42 crc kubenswrapper[4794]: I0216 17:02:42.516918 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gj5mf" event={"ID":"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4","Type":"ContainerDied","Data":"f3f3f062dc5ecbd0d964cbc1b2b1277a80bb8bda777974051ad0475694102b37"}
Feb 16 17:02:42 crc kubenswrapper[4794]: I0216 17:02:42.517264 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gxf57" podUID="0c2a611b-e699-45f3-a8ca-a687be266a1f" containerName="registry-server" containerID="cri-o://86cc8abccf7db28968aaf72abf1fc5df9b9e3db3afcb45c5a6e48f297959dcf5" gracePeriod=2
Feb 16 17:02:42 crc kubenswrapper[4794]: I0216 17:02:42.560972 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-69m8b"
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.011291 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gxf57"
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.066818 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td8nh\" (UniqueName: \"kubernetes.io/projected/0c2a611b-e699-45f3-a8ca-a687be266a1f-kube-api-access-td8nh\") pod \"0c2a611b-e699-45f3-a8ca-a687be266a1f\" (UID: \"0c2a611b-e699-45f3-a8ca-a687be266a1f\") "
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.067243 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2a611b-e699-45f3-a8ca-a687be266a1f-utilities\") pod \"0c2a611b-e699-45f3-a8ca-a687be266a1f\" (UID: \"0c2a611b-e699-45f3-a8ca-a687be266a1f\") "
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.067409 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2a611b-e699-45f3-a8ca-a687be266a1f-catalog-content\") pod \"0c2a611b-e699-45f3-a8ca-a687be266a1f\" (UID: \"0c2a611b-e699-45f3-a8ca-a687be266a1f\") "
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.068331 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c2a611b-e699-45f3-a8ca-a687be266a1f-utilities" (OuterVolumeSpecName: "utilities") pod "0c2a611b-e699-45f3-a8ca-a687be266a1f" (UID: "0c2a611b-e699-45f3-a8ca-a687be266a1f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.073119 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c2a611b-e699-45f3-a8ca-a687be266a1f-kube-api-access-td8nh" (OuterVolumeSpecName: "kube-api-access-td8nh") pod "0c2a611b-e699-45f3-a8ca-a687be266a1f" (UID: "0c2a611b-e699-45f3-a8ca-a687be266a1f"). InnerVolumeSpecName "kube-api-access-td8nh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.110198 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gj5mf"
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.116900 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c2a611b-e699-45f3-a8ca-a687be266a1f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c2a611b-e699-45f3-a8ca-a687be266a1f" (UID: "0c2a611b-e699-45f3-a8ca-a687be266a1f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.169079 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbtrp\" (UniqueName: \"kubernetes.io/projected/68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4-kube-api-access-xbtrp\") pod \"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4\" (UID: \"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4\") "
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.169650 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4-catalog-content\") pod \"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4\" (UID: \"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4\") "
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.169854 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4-utilities\") pod \"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4\" (UID: \"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4\") "
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.170739 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4-utilities" (OuterVolumeSpecName: "utilities") pod "68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4" (UID: "68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.171351 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.171477 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2a611b-e699-45f3-a8ca-a687be266a1f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.171563 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td8nh\" (UniqueName: \"kubernetes.io/projected/0c2a611b-e699-45f3-a8ca-a687be266a1f-kube-api-access-td8nh\") on node \"crc\" DevicePath \"\""
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.171698 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2a611b-e699-45f3-a8ca-a687be266a1f-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.172236 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4-kube-api-access-xbtrp" (OuterVolumeSpecName: "kube-api-access-xbtrp") pod "68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4" (UID: "68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4"). InnerVolumeSpecName "kube-api-access-xbtrp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.221513 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4" (UID: "68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.272615 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbtrp\" (UniqueName: \"kubernetes.io/projected/68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4-kube-api-access-xbtrp\") on node \"crc\" DevicePath \"\""
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.272660 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.526029 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gj5mf" event={"ID":"68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4","Type":"ContainerDied","Data":"dcaada487ce6dc13cfa8ffb912817d09ba242370b8fee3c36e9b1aa6aa10768d"}
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.526341 4794 scope.go:117] "RemoveContainer" containerID="f3f3f062dc5ecbd0d964cbc1b2b1277a80bb8bda777974051ad0475694102b37"
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.526624 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gj5mf"
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.532077 4794 generic.go:334] "Generic (PLEG): container finished" podID="0c2a611b-e699-45f3-a8ca-a687be266a1f" containerID="86cc8abccf7db28968aaf72abf1fc5df9b9e3db3afcb45c5a6e48f297959dcf5" exitCode=0
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.532224 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gxf57" event={"ID":"0c2a611b-e699-45f3-a8ca-a687be266a1f","Type":"ContainerDied","Data":"86cc8abccf7db28968aaf72abf1fc5df9b9e3db3afcb45c5a6e48f297959dcf5"}
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.532392 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gxf57" event={"ID":"0c2a611b-e699-45f3-a8ca-a687be266a1f","Type":"ContainerDied","Data":"66b4e9d64f78da64a1b2d7d51d30d62ffff6140465fe7c2997814c53c2ca3a58"}
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.532259 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gxf57"
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.554110 4794 scope.go:117] "RemoveContainer" containerID="b6d16b138537156df50e3491a6291cd82b5b46bd217edc0993950375f0988c84"
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.561671 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gj5mf"]
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.567207 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gj5mf"]
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.574533 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gxf57"]
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.577728 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gxf57"]
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.587527 4794 scope.go:117] "RemoveContainer" containerID="4343b2a6b0391ee86617460676f89afd50fb3a665a1e28a2601eaa2c6f530a4d"
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.601236 4794 scope.go:117] "RemoveContainer" containerID="86cc8abccf7db28968aaf72abf1fc5df9b9e3db3afcb45c5a6e48f297959dcf5"
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.622104 4794 scope.go:117] "RemoveContainer" containerID="2e4ddedae6fe585a8bcfb6256be26a7355c61d9054ec576ec0e82118a6890e4b"
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.641545 4794 scope.go:117] "RemoveContainer" containerID="d654ef9a7b68f323e80fcbb4e0dfe5554d402a5b222e4ad4a3c28618ff8d01b3"
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.666750 4794 scope.go:117] "RemoveContainer" containerID="86cc8abccf7db28968aaf72abf1fc5df9b9e3db3afcb45c5a6e48f297959dcf5"
Feb 16 17:02:43 crc kubenswrapper[4794]: E0216 17:02:43.667184 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86cc8abccf7db28968aaf72abf1fc5df9b9e3db3afcb45c5a6e48f297959dcf5\": container with ID starting with 86cc8abccf7db28968aaf72abf1fc5df9b9e3db3afcb45c5a6e48f297959dcf5 not found: ID does not exist" containerID="86cc8abccf7db28968aaf72abf1fc5df9b9e3db3afcb45c5a6e48f297959dcf5"
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.667227 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86cc8abccf7db28968aaf72abf1fc5df9b9e3db3afcb45c5a6e48f297959dcf5"} err="failed to get container status \"86cc8abccf7db28968aaf72abf1fc5df9b9e3db3afcb45c5a6e48f297959dcf5\": rpc error: code = NotFound desc = could not find container \"86cc8abccf7db28968aaf72abf1fc5df9b9e3db3afcb45c5a6e48f297959dcf5\": container with ID starting with 86cc8abccf7db28968aaf72abf1fc5df9b9e3db3afcb45c5a6e48f297959dcf5 not found: ID does not exist"
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.667259 4794 scope.go:117] "RemoveContainer" containerID="2e4ddedae6fe585a8bcfb6256be26a7355c61d9054ec576ec0e82118a6890e4b"
Feb 16 17:02:43 crc kubenswrapper[4794]: E0216 17:02:43.667593 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e4ddedae6fe585a8bcfb6256be26a7355c61d9054ec576ec0e82118a6890e4b\": container with ID starting with 2e4ddedae6fe585a8bcfb6256be26a7355c61d9054ec576ec0e82118a6890e4b not found: ID does not exist" containerID="2e4ddedae6fe585a8bcfb6256be26a7355c61d9054ec576ec0e82118a6890e4b"
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.667631 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e4ddedae6fe585a8bcfb6256be26a7355c61d9054ec576ec0e82118a6890e4b"} err="failed to get container status \"2e4ddedae6fe585a8bcfb6256be26a7355c61d9054ec576ec0e82118a6890e4b\": rpc error: code = NotFound desc = could not find container \"2e4ddedae6fe585a8bcfb6256be26a7355c61d9054ec576ec0e82118a6890e4b\": container with ID starting with 2e4ddedae6fe585a8bcfb6256be26a7355c61d9054ec576ec0e82118a6890e4b not found: ID does not exist"
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.667658 4794 scope.go:117] "RemoveContainer" containerID="d654ef9a7b68f323e80fcbb4e0dfe5554d402a5b222e4ad4a3c28618ff8d01b3"
Feb 16 17:02:43 crc kubenswrapper[4794]: E0216 17:02:43.667876 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d654ef9a7b68f323e80fcbb4e0dfe5554d402a5b222e4ad4a3c28618ff8d01b3\": container with ID starting with d654ef9a7b68f323e80fcbb4e0dfe5554d402a5b222e4ad4a3c28618ff8d01b3 not found: ID does not exist" containerID="d654ef9a7b68f323e80fcbb4e0dfe5554d402a5b222e4ad4a3c28618ff8d01b3"
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.667904 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d654ef9a7b68f323e80fcbb4e0dfe5554d402a5b222e4ad4a3c28618ff8d01b3"} err="failed to get container status \"d654ef9a7b68f323e80fcbb4e0dfe5554d402a5b222e4ad4a3c28618ff8d01b3\": rpc error: code = NotFound desc = could not find container \"d654ef9a7b68f323e80fcbb4e0dfe5554d402a5b222e4ad4a3c28618ff8d01b3\": container with ID starting with d654ef9a7b68f323e80fcbb4e0dfe5554d402a5b222e4ad4a3c28618ff8d01b3 not found: ID does not exist"
Feb 16 17:02:43 crc kubenswrapper[4794]: I0216 17:02:43.962879 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 17:02:44 crc kubenswrapper[4794]: I0216 17:02:44.496161 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-69m8b"]
Feb 16 17:02:44 crc kubenswrapper[4794]: I0216 17:02:44.799411 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c2a611b-e699-45f3-a8ca-a687be266a1f" path="/var/lib/kubelet/pods/0c2a611b-e699-45f3-a8ca-a687be266a1f/volumes"
Feb 16 17:02:44 crc kubenswrapper[4794]: I0216 17:02:44.800245 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4" path="/var/lib/kubelet/pods/68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4/volumes"
Feb 16 17:02:45 crc kubenswrapper[4794]: I0216 17:02:45.546877 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-69m8b" podUID="7039bec8-af08-4439-be97-c6ee7d3a1c3b" containerName="registry-server" containerID="cri-o://9603c865843a7abe69d329048e1905ee5512b76d43a2ffcd6d53c1644b780c09" gracePeriod=2
Feb 16 17:02:46 crc kubenswrapper[4794]: I0216 17:02:46.557665 4794 generic.go:334] "Generic (PLEG): container finished" podID="7039bec8-af08-4439-be97-c6ee7d3a1c3b" containerID="9603c865843a7abe69d329048e1905ee5512b76d43a2ffcd6d53c1644b780c09" exitCode=0
Feb 16 17:02:46 crc kubenswrapper[4794]: I0216 17:02:46.557744 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69m8b" event={"ID":"7039bec8-af08-4439-be97-c6ee7d3a1c3b","Type":"ContainerDied","Data":"9603c865843a7abe69d329048e1905ee5512b76d43a2ffcd6d53c1644b780c09"}
Feb 16 17:02:46 crc kubenswrapper[4794]: I0216 17:02:46.557983 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69m8b" event={"ID":"7039bec8-af08-4439-be97-c6ee7d3a1c3b","Type":"ContainerDied","Data":"31ffc3c4144597eed2bada54478d8f685ecd5904dfe32ad979d007d47ba95a7b"}
Feb 16 17:02:46 crc kubenswrapper[4794]: I0216 17:02:46.558002 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31ffc3c4144597eed2bada54478d8f685ecd5904dfe32ad979d007d47ba95a7b"
Feb 16 17:02:46 crc kubenswrapper[4794]: I0216 17:02:46.578239 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69m8b"
Feb 16 17:02:46 crc kubenswrapper[4794]: I0216 17:02:46.621694 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7039bec8-af08-4439-be97-c6ee7d3a1c3b-catalog-content\") pod \"7039bec8-af08-4439-be97-c6ee7d3a1c3b\" (UID: \"7039bec8-af08-4439-be97-c6ee7d3a1c3b\") "
Feb 16 17:02:46 crc kubenswrapper[4794]: I0216 17:02:46.621830 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8kfl\" (UniqueName: \"kubernetes.io/projected/7039bec8-af08-4439-be97-c6ee7d3a1c3b-kube-api-access-f8kfl\") pod \"7039bec8-af08-4439-be97-c6ee7d3a1c3b\" (UID: \"7039bec8-af08-4439-be97-c6ee7d3a1c3b\") "
Feb 16 17:02:46 crc kubenswrapper[4794]: I0216 17:02:46.621855 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7039bec8-af08-4439-be97-c6ee7d3a1c3b-utilities\") pod \"7039bec8-af08-4439-be97-c6ee7d3a1c3b\" (UID: \"7039bec8-af08-4439-be97-c6ee7d3a1c3b\") "
Feb 16 17:02:46 crc kubenswrapper[4794]: I0216 17:02:46.622744 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7039bec8-af08-4439-be97-c6ee7d3a1c3b-utilities" (OuterVolumeSpecName: "utilities") pod "7039bec8-af08-4439-be97-c6ee7d3a1c3b" (UID: "7039bec8-af08-4439-be97-c6ee7d3a1c3b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:02:46 crc kubenswrapper[4794]: I0216 17:02:46.628216 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7039bec8-af08-4439-be97-c6ee7d3a1c3b-kube-api-access-f8kfl" (OuterVolumeSpecName: "kube-api-access-f8kfl") pod "7039bec8-af08-4439-be97-c6ee7d3a1c3b" (UID: "7039bec8-af08-4439-be97-c6ee7d3a1c3b"). InnerVolumeSpecName "kube-api-access-f8kfl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:02:46 crc kubenswrapper[4794]: I0216 17:02:46.646946 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7039bec8-af08-4439-be97-c6ee7d3a1c3b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7039bec8-af08-4439-be97-c6ee7d3a1c3b" (UID: "7039bec8-af08-4439-be97-c6ee7d3a1c3b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:02:46 crc kubenswrapper[4794]: I0216 17:02:46.723163 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8kfl\" (UniqueName: \"kubernetes.io/projected/7039bec8-af08-4439-be97-c6ee7d3a1c3b-kube-api-access-f8kfl\") on node \"crc\" DevicePath \"\""
Feb 16 17:02:46 crc kubenswrapper[4794]: I0216 17:02:46.723501 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7039bec8-af08-4439-be97-c6ee7d3a1c3b-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 17:02:46 crc kubenswrapper[4794]: I0216 17:02:46.723513 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7039bec8-af08-4439-be97-c6ee7d3a1c3b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 17:02:47 crc kubenswrapper[4794]: I0216 17:02:47.562629 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69m8b"
Feb 16 17:02:47 crc kubenswrapper[4794]: I0216 17:02:47.582763 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-69m8b"]
Feb 16 17:02:47 crc kubenswrapper[4794]: I0216 17:02:47.585005 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-69m8b"]
Feb 16 17:02:48 crc kubenswrapper[4794]: I0216 17:02:48.802346 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7039bec8-af08-4439-be97-c6ee7d3a1c3b" path="/var/lib/kubelet/pods/7039bec8-af08-4439-be97-c6ee7d3a1c3b/volumes"
Feb 16 17:02:50 crc kubenswrapper[4794]: I0216 17:02:50.140795 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 17:02:50 crc kubenswrapper[4794]: I0216 17:02:50.140874 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 17:02:50 crc kubenswrapper[4794]: I0216 17:02:50.292975 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" podUID="2b7e6568-6f15-4a8f-aca6-38be84a1a624" containerName="oauth-openshift" containerID="cri-o://b71ade9bb24bbfcad1d6e843429935de8b6387450f68313e7ed1f54116cc34e9" gracePeriod=15
Feb 16 17:02:50 crc kubenswrapper[4794]: I0216 17:02:50.581608 4794 generic.go:334] "Generic (PLEG): container finished" podID="2b7e6568-6f15-4a8f-aca6-38be84a1a624" containerID="b71ade9bb24bbfcad1d6e843429935de8b6387450f68313e7ed1f54116cc34e9" exitCode=0
Feb 16 17:02:50 crc kubenswrapper[4794]: I0216 17:02:50.581660 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" event={"ID":"2b7e6568-6f15-4a8f-aca6-38be84a1a624","Type":"ContainerDied","Data":"b71ade9bb24bbfcad1d6e843429935de8b6387450f68313e7ed1f54116cc34e9"}
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.396328 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h"
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.487062 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-router-certs\") pod \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") "
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.487111 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-template-error\") pod \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") "
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.487140 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6p66\" (UniqueName: \"kubernetes.io/projected/2b7e6568-6f15-4a8f-aca6-38be84a1a624-kube-api-access-d6p66\") pod \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") "
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.487157 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-template-login\") pod \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") "
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.487195 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-session\") pod \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") "
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.487231 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-ocp-branding-template\") pod \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") "
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.487271 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-cliconfig\") pod \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") "
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.487321 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-serving-cert\") pod \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") "
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.487356 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-idp-0-file-data\") pod \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") "
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.487387 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-audit-policies\") pod \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") "
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.487412 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-trusted-ca-bundle\") pod \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") "
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.487453 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-service-ca\") pod \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") "
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.487488 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2b7e6568-6f15-4a8f-aca6-38be84a1a624-audit-dir\") pod \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") "
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.487531 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-template-provider-selection\") pod \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\" (UID: \"2b7e6568-6f15-4a8f-aca6-38be84a1a624\") "
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.488288 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "2b7e6568-6f15-4a8f-aca6-38be84a1a624" (UID: "2b7e6568-6f15-4a8f-aca6-38be84a1a624"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.488399 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b7e6568-6f15-4a8f-aca6-38be84a1a624-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "2b7e6568-6f15-4a8f-aca6-38be84a1a624" (UID: "2b7e6568-6f15-4a8f-aca6-38be84a1a624"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.489208 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "2b7e6568-6f15-4a8f-aca6-38be84a1a624" (UID: "2b7e6568-6f15-4a8f-aca6-38be84a1a624"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.489350 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "2b7e6568-6f15-4a8f-aca6-38be84a1a624" (UID: "2b7e6568-6f15-4a8f-aca6-38be84a1a624"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.489675 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "2b7e6568-6f15-4a8f-aca6-38be84a1a624" (UID: "2b7e6568-6f15-4a8f-aca6-38be84a1a624"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.493597 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "2b7e6568-6f15-4a8f-aca6-38be84a1a624" (UID: "2b7e6568-6f15-4a8f-aca6-38be84a1a624"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.493977 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "2b7e6568-6f15-4a8f-aca6-38be84a1a624" (UID: "2b7e6568-6f15-4a8f-aca6-38be84a1a624"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.494657 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "2b7e6568-6f15-4a8f-aca6-38be84a1a624" (UID: "2b7e6568-6f15-4a8f-aca6-38be84a1a624"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.494725 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b7e6568-6f15-4a8f-aca6-38be84a1a624-kube-api-access-d6p66" (OuterVolumeSpecName: "kube-api-access-d6p66") pod "2b7e6568-6f15-4a8f-aca6-38be84a1a624" (UID: "2b7e6568-6f15-4a8f-aca6-38be84a1a624"). InnerVolumeSpecName "kube-api-access-d6p66". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.495193 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "2b7e6568-6f15-4a8f-aca6-38be84a1a624" (UID: "2b7e6568-6f15-4a8f-aca6-38be84a1a624"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.495399 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "2b7e6568-6f15-4a8f-aca6-38be84a1a624" (UID: "2b7e6568-6f15-4a8f-aca6-38be84a1a624"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.495581 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "2b7e6568-6f15-4a8f-aca6-38be84a1a624" (UID: "2b7e6568-6f15-4a8f-aca6-38be84a1a624"). InnerVolumeSpecName "v4-0-config-system-router-certs".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.495728 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "2b7e6568-6f15-4a8f-aca6-38be84a1a624" (UID: "2b7e6568-6f15-4a8f-aca6-38be84a1a624"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.496065 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "2b7e6568-6f15-4a8f-aca6-38be84a1a624" (UID: "2b7e6568-6f15-4a8f-aca6-38be84a1a624"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.588116 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" event={"ID":"2b7e6568-6f15-4a8f-aca6-38be84a1a624","Type":"ContainerDied","Data":"8f4bb954ca2e086af5e8e513e1547c5bf64b67356c4c1467fe4366c4032b7a74"} Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.588176 4794 scope.go:117] "RemoveContainer" containerID="b71ade9bb24bbfcad1d6e843429935de8b6387450f68313e7ed1f54116cc34e9" Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.588337 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qfr5h" Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.588902 4794 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.588951 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6p66\" (UniqueName: \"kubernetes.io/projected/2b7e6568-6f15-4a8f-aca6-38be84a1a624-kube-api-access-d6p66\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.588972 4794 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.588984 4794 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.588997 4794 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.589011 4794 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.589024 4794 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.589038 4794 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.589050 4794 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.589062 4794 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.589075 4794 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.589088 4794 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2b7e6568-6f15-4a8f-aca6-38be84a1a624-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.589100 4794 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 
17:02:51.589115 4794 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2b7e6568-6f15-4a8f-aca6-38be84a1a624-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.616363 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qfr5h"] Feb 16 17:02:51 crc kubenswrapper[4794]: I0216 17:02:51.618354 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qfr5h"] Feb 16 17:02:52 crc kubenswrapper[4794]: I0216 17:02:52.803369 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b7e6568-6f15-4a8f-aca6-38be84a1a624" path="/var/lib/kubelet/pods/2b7e6568-6f15-4a8f-aca6-38be84a1a624/volumes" Feb 16 17:02:54 crc kubenswrapper[4794]: I0216 17:02:54.858656 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-748f84f96-5pn2r"] Feb 16 17:02:54 crc kubenswrapper[4794]: I0216 17:02:54.858968 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" podUID="b4948f11-fc76-4ab0-880e-670b3db638a9" containerName="controller-manager" containerID="cri-o://4aeff2ceb41fca1d1666270db92f8f3f9eb80007de15d4252e3edfaf2860d748" gracePeriod=30 Feb 16 17:02:54 crc kubenswrapper[4794]: I0216 17:02:54.950512 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw"] Feb 16 17:02:54 crc kubenswrapper[4794]: I0216 17:02:54.950980 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" podUID="07616a6b-a602-4f28-a88c-c5f71b56466a" containerName="route-controller-manager" 
containerID="cri-o://9b24ed5bbb960dd3d24fc18e6c9a2888ca98831cc5a77a70d662cf5fa140b8d3" gracePeriod=30 Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.384893 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.442718 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcskh\" (UniqueName: \"kubernetes.io/projected/07616a6b-a602-4f28-a88c-c5f71b56466a-kube-api-access-rcskh\") pod \"07616a6b-a602-4f28-a88c-c5f71b56466a\" (UID: \"07616a6b-a602-4f28-a88c-c5f71b56466a\") " Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.442785 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07616a6b-a602-4f28-a88c-c5f71b56466a-client-ca\") pod \"07616a6b-a602-4f28-a88c-c5f71b56466a\" (UID: \"07616a6b-a602-4f28-a88c-c5f71b56466a\") " Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.442810 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07616a6b-a602-4f28-a88c-c5f71b56466a-config\") pod \"07616a6b-a602-4f28-a88c-c5f71b56466a\" (UID: \"07616a6b-a602-4f28-a88c-c5f71b56466a\") " Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.442828 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07616a6b-a602-4f28-a88c-c5f71b56466a-serving-cert\") pod \"07616a6b-a602-4f28-a88c-c5f71b56466a\" (UID: \"07616a6b-a602-4f28-a88c-c5f71b56466a\") " Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.444148 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07616a6b-a602-4f28-a88c-c5f71b56466a-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"07616a6b-a602-4f28-a88c-c5f71b56466a" (UID: "07616a6b-a602-4f28-a88c-c5f71b56466a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.444265 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07616a6b-a602-4f28-a88c-c5f71b56466a-config" (OuterVolumeSpecName: "config") pod "07616a6b-a602-4f28-a88c-c5f71b56466a" (UID: "07616a6b-a602-4f28-a88c-c5f71b56466a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.448995 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07616a6b-a602-4f28-a88c-c5f71b56466a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "07616a6b-a602-4f28-a88c-c5f71b56466a" (UID: "07616a6b-a602-4f28-a88c-c5f71b56466a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.449678 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07616a6b-a602-4f28-a88c-c5f71b56466a-kube-api-access-rcskh" (OuterVolumeSpecName: "kube-api-access-rcskh") pod "07616a6b-a602-4f28-a88c-c5f71b56466a" (UID: "07616a6b-a602-4f28-a88c-c5f71b56466a"). InnerVolumeSpecName "kube-api-access-rcskh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.458204 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.544185 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrjxg\" (UniqueName: \"kubernetes.io/projected/b4948f11-fc76-4ab0-880e-670b3db638a9-kube-api-access-qrjxg\") pod \"b4948f11-fc76-4ab0-880e-670b3db638a9\" (UID: \"b4948f11-fc76-4ab0-880e-670b3db638a9\") " Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.544274 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4948f11-fc76-4ab0-880e-670b3db638a9-config\") pod \"b4948f11-fc76-4ab0-880e-670b3db638a9\" (UID: \"b4948f11-fc76-4ab0-880e-670b3db638a9\") " Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.544318 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4948f11-fc76-4ab0-880e-670b3db638a9-serving-cert\") pod \"b4948f11-fc76-4ab0-880e-670b3db638a9\" (UID: \"b4948f11-fc76-4ab0-880e-670b3db638a9\") " Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.544350 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4948f11-fc76-4ab0-880e-670b3db638a9-client-ca\") pod \"b4948f11-fc76-4ab0-880e-670b3db638a9\" (UID: \"b4948f11-fc76-4ab0-880e-670b3db638a9\") " Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.544382 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4948f11-fc76-4ab0-880e-670b3db638a9-proxy-ca-bundles\") pod \"b4948f11-fc76-4ab0-880e-670b3db638a9\" (UID: \"b4948f11-fc76-4ab0-880e-670b3db638a9\") " Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.544557 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcskh\" 
(UniqueName: \"kubernetes.io/projected/07616a6b-a602-4f28-a88c-c5f71b56466a-kube-api-access-rcskh\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.544567 4794 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07616a6b-a602-4f28-a88c-c5f71b56466a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.544577 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07616a6b-a602-4f28-a88c-c5f71b56466a-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.544585 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07616a6b-a602-4f28-a88c-c5f71b56466a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.545187 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4948f11-fc76-4ab0-880e-670b3db638a9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b4948f11-fc76-4ab0-880e-670b3db638a9" (UID: "b4948f11-fc76-4ab0-880e-670b3db638a9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.545244 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4948f11-fc76-4ab0-880e-670b3db638a9-client-ca" (OuterVolumeSpecName: "client-ca") pod "b4948f11-fc76-4ab0-880e-670b3db638a9" (UID: "b4948f11-fc76-4ab0-880e-670b3db638a9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.545918 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4948f11-fc76-4ab0-880e-670b3db638a9-config" (OuterVolumeSpecName: "config") pod "b4948f11-fc76-4ab0-880e-670b3db638a9" (UID: "b4948f11-fc76-4ab0-880e-670b3db638a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.553668 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4948f11-fc76-4ab0-880e-670b3db638a9-kube-api-access-qrjxg" (OuterVolumeSpecName: "kube-api-access-qrjxg") pod "b4948f11-fc76-4ab0-880e-670b3db638a9" (UID: "b4948f11-fc76-4ab0-880e-670b3db638a9"). InnerVolumeSpecName "kube-api-access-qrjxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.553662 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4948f11-fc76-4ab0-880e-670b3db638a9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b4948f11-fc76-4ab0-880e-670b3db638a9" (UID: "b4948f11-fc76-4ab0-880e-670b3db638a9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.613115 4794 generic.go:334] "Generic (PLEG): container finished" podID="b4948f11-fc76-4ab0-880e-670b3db638a9" containerID="4aeff2ceb41fca1d1666270db92f8f3f9eb80007de15d4252e3edfaf2860d748" exitCode=0 Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.613189 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.613199 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" event={"ID":"b4948f11-fc76-4ab0-880e-670b3db638a9","Type":"ContainerDied","Data":"4aeff2ceb41fca1d1666270db92f8f3f9eb80007de15d4252e3edfaf2860d748"} Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.613357 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748f84f96-5pn2r" event={"ID":"b4948f11-fc76-4ab0-880e-670b3db638a9","Type":"ContainerDied","Data":"1f116119c81ef9cdeaa100618d064abaaa7f9ad421f124dedd31a7018dcc44b6"} Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.613385 4794 scope.go:117] "RemoveContainer" containerID="4aeff2ceb41fca1d1666270db92f8f3f9eb80007de15d4252e3edfaf2860d748" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.614753 4794 generic.go:334] "Generic (PLEG): container finished" podID="07616a6b-a602-4f28-a88c-c5f71b56466a" containerID="9b24ed5bbb960dd3d24fc18e6c9a2888ca98831cc5a77a70d662cf5fa140b8d3" exitCode=0 Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.614778 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" event={"ID":"07616a6b-a602-4f28-a88c-c5f71b56466a","Type":"ContainerDied","Data":"9b24ed5bbb960dd3d24fc18e6c9a2888ca98831cc5a77a70d662cf5fa140b8d3"} Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.614796 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" event={"ID":"07616a6b-a602-4f28-a88c-c5f71b56466a","Type":"ContainerDied","Data":"76afd5d4fae18a3a01f97ed6b9cd9258e6d923456e942e0faf3367e8b6a5a85f"} Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.614812 4794 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.640827 4794 scope.go:117] "RemoveContainer" containerID="4aeff2ceb41fca1d1666270db92f8f3f9eb80007de15d4252e3edfaf2860d748" Feb 16 17:02:55 crc kubenswrapper[4794]: E0216 17:02:55.643730 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aeff2ceb41fca1d1666270db92f8f3f9eb80007de15d4252e3edfaf2860d748\": container with ID starting with 4aeff2ceb41fca1d1666270db92f8f3f9eb80007de15d4252e3edfaf2860d748 not found: ID does not exist" containerID="4aeff2ceb41fca1d1666270db92f8f3f9eb80007de15d4252e3edfaf2860d748" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.643775 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aeff2ceb41fca1d1666270db92f8f3f9eb80007de15d4252e3edfaf2860d748"} err="failed to get container status \"4aeff2ceb41fca1d1666270db92f8f3f9eb80007de15d4252e3edfaf2860d748\": rpc error: code = NotFound desc = could not find container \"4aeff2ceb41fca1d1666270db92f8f3f9eb80007de15d4252e3edfaf2860d748\": container with ID starting with 4aeff2ceb41fca1d1666270db92f8f3f9eb80007de15d4252e3edfaf2860d748 not found: ID does not exist" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.643806 4794 scope.go:117] "RemoveContainer" containerID="9b24ed5bbb960dd3d24fc18e6c9a2888ca98831cc5a77a70d662cf5fa140b8d3" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.644812 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-748f84f96-5pn2r"] Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.645408 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4948f11-fc76-4ab0-880e-670b3db638a9-config\") on node \"crc\" DevicePath \"\"" Feb 
16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.645424 4794 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4948f11-fc76-4ab0-880e-670b3db638a9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.645436 4794 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4948f11-fc76-4ab0-880e-670b3db638a9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.645446 4794 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4948f11-fc76-4ab0-880e-670b3db638a9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.645457 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrjxg\" (UniqueName: \"kubernetes.io/projected/b4948f11-fc76-4ab0-880e-670b3db638a9-kube-api-access-qrjxg\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.647467 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-748f84f96-5pn2r"] Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.656357 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw"] Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.656870 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55cb94d69b-vbfgw"] Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.663081 4794 scope.go:117] "RemoveContainer" containerID="9b24ed5bbb960dd3d24fc18e6c9a2888ca98831cc5a77a70d662cf5fa140b8d3" Feb 16 17:02:55 crc kubenswrapper[4794]: E0216 17:02:55.663591 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"9b24ed5bbb960dd3d24fc18e6c9a2888ca98831cc5a77a70d662cf5fa140b8d3\": container with ID starting with 9b24ed5bbb960dd3d24fc18e6c9a2888ca98831cc5a77a70d662cf5fa140b8d3 not found: ID does not exist" containerID="9b24ed5bbb960dd3d24fc18e6c9a2888ca98831cc5a77a70d662cf5fa140b8d3" Feb 16 17:02:55 crc kubenswrapper[4794]: I0216 17:02:55.663635 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b24ed5bbb960dd3d24fc18e6c9a2888ca98831cc5a77a70d662cf5fa140b8d3"} err="failed to get container status \"9b24ed5bbb960dd3d24fc18e6c9a2888ca98831cc5a77a70d662cf5fa140b8d3\": rpc error: code = NotFound desc = could not find container \"9b24ed5bbb960dd3d24fc18e6c9a2888ca98831cc5a77a70d662cf5fa140b8d3\": container with ID starting with 9b24ed5bbb960dd3d24fc18e6c9a2888ca98831cc5a77a70d662cf5fa140b8d3 not found: ID does not exist" Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.385819 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"] Feb 16 17:02:56 crc kubenswrapper[4794]: E0216 17:02:56.386088 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7039bec8-af08-4439-be97-c6ee7d3a1c3b" containerName="registry-server" Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.386104 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="7039bec8-af08-4439-be97-c6ee7d3a1c3b" containerName="registry-server" Feb 16 17:02:56 crc kubenswrapper[4794]: E0216 17:02:56.386116 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7039bec8-af08-4439-be97-c6ee7d3a1c3b" containerName="extract-utilities" Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.386125 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="7039bec8-af08-4439-be97-c6ee7d3a1c3b" containerName="extract-utilities" Feb 16 17:02:56 crc kubenswrapper[4794]: E0216 17:02:56.386143 4794 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="2b7e6568-6f15-4a8f-aca6-38be84a1a624" containerName="oauth-openshift"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.386152 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7e6568-6f15-4a8f-aca6-38be84a1a624" containerName="oauth-openshift"
Feb 16 17:02:56 crc kubenswrapper[4794]: E0216 17:02:56.386163 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7039bec8-af08-4439-be97-c6ee7d3a1c3b" containerName="extract-content"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.386171 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="7039bec8-af08-4439-be97-c6ee7d3a1c3b" containerName="extract-content"
Feb 16 17:02:56 crc kubenswrapper[4794]: E0216 17:02:56.386183 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07616a6b-a602-4f28-a88c-c5f71b56466a" containerName="route-controller-manager"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.386193 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="07616a6b-a602-4f28-a88c-c5f71b56466a" containerName="route-controller-manager"
Feb 16 17:02:56 crc kubenswrapper[4794]: E0216 17:02:56.386206 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2a611b-e699-45f3-a8ca-a687be266a1f" containerName="registry-server"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.386214 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2a611b-e699-45f3-a8ca-a687be266a1f" containerName="registry-server"
Feb 16 17:02:56 crc kubenswrapper[4794]: E0216 17:02:56.386227 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4948f11-fc76-4ab0-880e-670b3db638a9" containerName="controller-manager"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.386234 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4948f11-fc76-4ab0-880e-670b3db638a9" containerName="controller-manager"
Feb 16 17:02:56 crc kubenswrapper[4794]: E0216 17:02:56.386246 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2a611b-e699-45f3-a8ca-a687be266a1f" containerName="extract-utilities"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.386254 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2a611b-e699-45f3-a8ca-a687be266a1f" containerName="extract-utilities"
Feb 16 17:02:56 crc kubenswrapper[4794]: E0216 17:02:56.386264 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4" containerName="extract-content"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.386272 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4" containerName="extract-content"
Feb 16 17:02:56 crc kubenswrapper[4794]: E0216 17:02:56.386282 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4" containerName="registry-server"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.386289 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4" containerName="registry-server"
Feb 16 17:02:56 crc kubenswrapper[4794]: E0216 17:02:56.386342 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2a611b-e699-45f3-a8ca-a687be266a1f" containerName="extract-content"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.386350 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2a611b-e699-45f3-a8ca-a687be266a1f" containerName="extract-content"
Feb 16 17:02:56 crc kubenswrapper[4794]: E0216 17:02:56.386364 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4" containerName="extract-utilities"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.386372 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4" containerName="extract-utilities"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.386507 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="68b9b9c6-df6b-4125-bd0c-6352e6f4f2d4" containerName="registry-server"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.386522 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c2a611b-e699-45f3-a8ca-a687be266a1f" containerName="registry-server"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.386535 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="7039bec8-af08-4439-be97-c6ee7d3a1c3b" containerName="registry-server"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.386548 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b7e6568-6f15-4a8f-aca6-38be84a1a624" containerName="oauth-openshift"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.386560 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="07616a6b-a602-4f28-a88c-c5f71b56466a" containerName="route-controller-manager"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.386574 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4948f11-fc76-4ab0-880e-670b3db638a9" containerName="controller-manager"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.386993 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.390086 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.390345 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8"]
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.390674 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.390946 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.391268 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.391376 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.393687 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.393931 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.394248 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.394788 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.395246 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.395451 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.395857 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.396208 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.403568 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"]
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.404832 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.418142 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8"]
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.457123 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe50bc22-7feb-42a8-8fec-634bb678dac4-proxy-ca-bundles\") pod \"controller-manager-d87d4c7cc-kn29w\" (UID: \"fe50bc22-7feb-42a8-8fec-634bb678dac4\") " pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.458762 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvv4q\" (UniqueName: \"kubernetes.io/projected/fe50bc22-7feb-42a8-8fec-634bb678dac4-kube-api-access-vvv4q\") pod \"controller-manager-d87d4c7cc-kn29w\" (UID: \"fe50bc22-7feb-42a8-8fec-634bb678dac4\") " pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.458868 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d0d5957-7932-46a3-b42d-b53e6d67c148-serving-cert\") pod \"route-controller-manager-6688d8457f-5ktw8\" (UID: \"7d0d5957-7932-46a3-b42d-b53e6d67c148\") " pod="openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.459327 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe50bc22-7feb-42a8-8fec-634bb678dac4-client-ca\") pod \"controller-manager-d87d4c7cc-kn29w\" (UID: \"fe50bc22-7feb-42a8-8fec-634bb678dac4\") " pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.459433 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d0d5957-7932-46a3-b42d-b53e6d67c148-client-ca\") pod \"route-controller-manager-6688d8457f-5ktw8\" (UID: \"7d0d5957-7932-46a3-b42d-b53e6d67c148\") " pod="openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.459482 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe50bc22-7feb-42a8-8fec-634bb678dac4-serving-cert\") pod \"controller-manager-d87d4c7cc-kn29w\" (UID: \"fe50bc22-7feb-42a8-8fec-634bb678dac4\") " pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.459513 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0d5957-7932-46a3-b42d-b53e6d67c148-config\") pod \"route-controller-manager-6688d8457f-5ktw8\" (UID: \"7d0d5957-7932-46a3-b42d-b53e6d67c148\") " pod="openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.459541 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mn2j\" (UniqueName: \"kubernetes.io/projected/7d0d5957-7932-46a3-b42d-b53e6d67c148-kube-api-access-4mn2j\") pod \"route-controller-manager-6688d8457f-5ktw8\" (UID: \"7d0d5957-7932-46a3-b42d-b53e6d67c148\") " pod="openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.459578 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe50bc22-7feb-42a8-8fec-634bb678dac4-config\") pod \"controller-manager-d87d4c7cc-kn29w\" (UID: \"fe50bc22-7feb-42a8-8fec-634bb678dac4\") " pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.560878 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe50bc22-7feb-42a8-8fec-634bb678dac4-client-ca\") pod \"controller-manager-d87d4c7cc-kn29w\" (UID: \"fe50bc22-7feb-42a8-8fec-634bb678dac4\") " pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.560970 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d0d5957-7932-46a3-b42d-b53e6d67c148-client-ca\") pod \"route-controller-manager-6688d8457f-5ktw8\" (UID: \"7d0d5957-7932-46a3-b42d-b53e6d67c148\") " pod="openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.561003 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe50bc22-7feb-42a8-8fec-634bb678dac4-serving-cert\") pod \"controller-manager-d87d4c7cc-kn29w\" (UID: \"fe50bc22-7feb-42a8-8fec-634bb678dac4\") " pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.561027 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0d5957-7932-46a3-b42d-b53e6d67c148-config\") pod \"route-controller-manager-6688d8457f-5ktw8\" (UID: \"7d0d5957-7932-46a3-b42d-b53e6d67c148\") " pod="openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.561061 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mn2j\" (UniqueName: \"kubernetes.io/projected/7d0d5957-7932-46a3-b42d-b53e6d67c148-kube-api-access-4mn2j\") pod \"route-controller-manager-6688d8457f-5ktw8\" (UID: \"7d0d5957-7932-46a3-b42d-b53e6d67c148\") " pod="openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.561086 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe50bc22-7feb-42a8-8fec-634bb678dac4-config\") pod \"controller-manager-d87d4c7cc-kn29w\" (UID: \"fe50bc22-7feb-42a8-8fec-634bb678dac4\") " pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.561110 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe50bc22-7feb-42a8-8fec-634bb678dac4-proxy-ca-bundles\") pod \"controller-manager-d87d4c7cc-kn29w\" (UID: \"fe50bc22-7feb-42a8-8fec-634bb678dac4\") " pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.561126 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvv4q\" (UniqueName: \"kubernetes.io/projected/fe50bc22-7feb-42a8-8fec-634bb678dac4-kube-api-access-vvv4q\") pod \"controller-manager-d87d4c7cc-kn29w\" (UID: \"fe50bc22-7feb-42a8-8fec-634bb678dac4\") " pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.561156 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d0d5957-7932-46a3-b42d-b53e6d67c148-serving-cert\") pod \"route-controller-manager-6688d8457f-5ktw8\" (UID: \"7d0d5957-7932-46a3-b42d-b53e6d67c148\") " pod="openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.562038 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d0d5957-7932-46a3-b42d-b53e6d67c148-client-ca\") pod \"route-controller-manager-6688d8457f-5ktw8\" (UID: \"7d0d5957-7932-46a3-b42d-b53e6d67c148\") " pod="openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.562420 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe50bc22-7feb-42a8-8fec-634bb678dac4-client-ca\") pod \"controller-manager-d87d4c7cc-kn29w\" (UID: \"fe50bc22-7feb-42a8-8fec-634bb678dac4\") " pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.562659 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe50bc22-7feb-42a8-8fec-634bb678dac4-config\") pod \"controller-manager-d87d4c7cc-kn29w\" (UID: \"fe50bc22-7feb-42a8-8fec-634bb678dac4\") " pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.563273 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe50bc22-7feb-42a8-8fec-634bb678dac4-proxy-ca-bundles\") pod \"controller-manager-d87d4c7cc-kn29w\" (UID: \"fe50bc22-7feb-42a8-8fec-634bb678dac4\") " pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.563982 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0d5957-7932-46a3-b42d-b53e6d67c148-config\") pod \"route-controller-manager-6688d8457f-5ktw8\" (UID: \"7d0d5957-7932-46a3-b42d-b53e6d67c148\") " pod="openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.566636 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe50bc22-7feb-42a8-8fec-634bb678dac4-serving-cert\") pod \"controller-manager-d87d4c7cc-kn29w\" (UID: \"fe50bc22-7feb-42a8-8fec-634bb678dac4\") " pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.567366 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d0d5957-7932-46a3-b42d-b53e6d67c148-serving-cert\") pod \"route-controller-manager-6688d8457f-5ktw8\" (UID: \"7d0d5957-7932-46a3-b42d-b53e6d67c148\") " pod="openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.589512 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mn2j\" (UniqueName: \"kubernetes.io/projected/7d0d5957-7932-46a3-b42d-b53e6d67c148-kube-api-access-4mn2j\") pod \"route-controller-manager-6688d8457f-5ktw8\" (UID: \"7d0d5957-7932-46a3-b42d-b53e6d67c148\") " pod="openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.601029 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvv4q\" (UniqueName: \"kubernetes.io/projected/fe50bc22-7feb-42a8-8fec-634bb678dac4-kube-api-access-vvv4q\") pod \"controller-manager-d87d4c7cc-kn29w\" (UID: \"fe50bc22-7feb-42a8-8fec-634bb678dac4\") " pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.705876 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.712848 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.799982 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07616a6b-a602-4f28-a88c-c5f71b56466a" path="/var/lib/kubelet/pods/07616a6b-a602-4f28-a88c-c5f71b56466a/volumes"
Feb 16 17:02:56 crc kubenswrapper[4794]: I0216 17:02:56.800599 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4948f11-fc76-4ab0-880e-670b3db638a9" path="/var/lib/kubelet/pods/b4948f11-fc76-4ab0-880e-670b3db638a9/volumes"
Feb 16 17:02:57 crc kubenswrapper[4794]: I0216 17:02:57.007735 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"]
Feb 16 17:02:57 crc kubenswrapper[4794]: W0216 17:02:57.018063 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe50bc22_7feb_42a8_8fec_634bb678dac4.slice/crio-c51935b8954290b165890ac177b099e793aa1126a2ecdb3c80f66dab6803c0eb WatchSource:0}: Error finding container c51935b8954290b165890ac177b099e793aa1126a2ecdb3c80f66dab6803c0eb: Status 404 returned error can't find the container with id c51935b8954290b165890ac177b099e793aa1126a2ecdb3c80f66dab6803c0eb
Feb 16 17:02:57 crc kubenswrapper[4794]: I0216 17:02:57.259265 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8"]
Feb 16 17:02:57 crc kubenswrapper[4794]: W0216 17:02:57.270783 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d0d5957_7932_46a3_b42d_b53e6d67c148.slice/crio-e3b9f2704c731269181b81780cb66992719880a4ce02b5d77661f525a1a2bea0 WatchSource:0}: Error finding container e3b9f2704c731269181b81780cb66992719880a4ce02b5d77661f525a1a2bea0: Status 404 returned error can't find the container with id e3b9f2704c731269181b81780cb66992719880a4ce02b5d77661f525a1a2bea0
Feb 16 17:02:57 crc kubenswrapper[4794]: I0216 17:02:57.627621 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w" event={"ID":"fe50bc22-7feb-42a8-8fec-634bb678dac4","Type":"ContainerStarted","Data":"43f7b9b3da8c60d401db92989a15bd186d77fbbbd3cadb6daa6567ee4df48d29"}
Feb 16 17:02:57 crc kubenswrapper[4794]: I0216 17:02:57.627998 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w" event={"ID":"fe50bc22-7feb-42a8-8fec-634bb678dac4","Type":"ContainerStarted","Data":"c51935b8954290b165890ac177b099e793aa1126a2ecdb3c80f66dab6803c0eb"}
Feb 16 17:02:57 crc kubenswrapper[4794]: I0216 17:02:57.628022 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"
Feb 16 17:02:57 crc kubenswrapper[4794]: I0216 17:02:57.629284 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8" event={"ID":"7d0d5957-7932-46a3-b42d-b53e6d67c148","Type":"ContainerStarted","Data":"d8b78d34923252c5a86b42fdec5053e5160fbccd6a24550eb479270d8a52b94a"}
Feb 16 17:02:57 crc kubenswrapper[4794]: I0216 17:02:57.629338 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8" event={"ID":"7d0d5957-7932-46a3-b42d-b53e6d67c148","Type":"ContainerStarted","Data":"e3b9f2704c731269181b81780cb66992719880a4ce02b5d77661f525a1a2bea0"}
Feb 16 17:02:57 crc kubenswrapper[4794]: I0216 17:02:57.629533 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8"
Feb 16 17:02:57 crc kubenswrapper[4794]: I0216 17:02:57.630757 4794 patch_prober.go:28] interesting pod/route-controller-manager-6688d8457f-5ktw8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body=
Feb 16 17:02:57 crc kubenswrapper[4794]: I0216 17:02:57.630800 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8" podUID="7d0d5957-7932-46a3-b42d-b53e6d67c148" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused"
Feb 16 17:02:57 crc kubenswrapper[4794]: I0216 17:02:57.634703 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w"
Feb 16 17:02:57 crc kubenswrapper[4794]: I0216 17:02:57.653755 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d87d4c7cc-kn29w" podStartSLOduration=3.6537390370000002 podStartE2EDuration="3.653739037s" podCreationTimestamp="2026-02-16 17:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:57.650026774 +0000 UTC m=+203.598121421" watchObservedRunningTime="2026-02-16 17:02:57.653739037 +0000 UTC m=+203.601833704"
Feb 16 17:02:57 crc kubenswrapper[4794]: I0216 17:02:57.670278 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8" podStartSLOduration=3.670241055 podStartE2EDuration="3.670241055s" podCreationTimestamp="2026-02-16 17:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:57.669523675 +0000 UTC m=+203.617618352" watchObservedRunningTime="2026-02-16 17:02:57.670241055 +0000 UTC m=+203.618335702"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.384399 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"]
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.385214 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.389639 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.389873 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.390905 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.390933 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.391053 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.391143 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.391197 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.391372 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.392210 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.392581 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.393212 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.393376 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.398064 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.405719 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.416482 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"]
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.422875 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.485428 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-user-template-error\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.485681 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.485705 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.485729 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.485746 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7p2n\" (UniqueName: \"kubernetes.io/projected/f1f6d94a-c62e-408a-8cce-f051d76fc074-kube-api-access-m7p2n\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.485766 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.485783 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-system-session\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.485805 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f1f6d94a-c62e-408a-8cce-f051d76fc074-audit-dir\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.485835 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.485855 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.485877 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.485893 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f1f6d94a-c62e-408a-8cce-f051d76fc074-audit-policies\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.485924 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.485943 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-user-template-login\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.587364 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f1f6d94a-c62e-408a-8cce-f051d76fc074-audit-dir\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.587464 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.587502 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.587543 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.587577 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f1f6d94a-c62e-408a-8cce-f051d76fc074-audit-policies\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.587594 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f1f6d94a-c62e-408a-8cce-f051d76fc074-audit-dir\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.587643 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.587681 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-user-template-login\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.587711 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-user-template-error\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"
Feb 16
17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.587775 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.587808 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.587845 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.587881 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7p2n\" (UniqueName: \"kubernetes.io/projected/f1f6d94a-c62e-408a-8cce-f051d76fc074-kube-api-access-m7p2n\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.587913 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.587949 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-system-session\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.588497 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.589709 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.590045 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f1f6d94a-c62e-408a-8cce-f051d76fc074-audit-policies\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" Feb 16 17:02:58 crc 
kubenswrapper[4794]: I0216 17:02:58.590156 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.597071 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-user-template-error\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.597109 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.599853 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.600036 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.600061 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.600651 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-user-template-login\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.602061 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.603770 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f1f6d94a-c62e-408a-8cce-f051d76fc074-v4-0-config-system-session\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" 
Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.607951 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7p2n\" (UniqueName: \"kubernetes.io/projected/f1f6d94a-c62e-408a-8cce-f051d76fc074-kube-api-access-m7p2n\") pod \"oauth-openshift-5d49c9496c-ptmzk\" (UID: \"f1f6d94a-c62e-408a-8cce-f051d76fc074\") " pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.641495 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6688d8457f-5ktw8" Feb 16 17:02:58 crc kubenswrapper[4794]: I0216 17:02:58.717770 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" Feb 16 17:02:59 crc kubenswrapper[4794]: I0216 17:02:59.161551 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5d49c9496c-ptmzk"] Feb 16 17:02:59 crc kubenswrapper[4794]: W0216 17:02:59.181545 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1f6d94a_c62e_408a_8cce_f051d76fc074.slice/crio-e825f9a94f9413fb89632740658fc740802cdd01a8bf1daf687056c7bfb3b2f7 WatchSource:0}: Error finding container e825f9a94f9413fb89632740658fc740802cdd01a8bf1daf687056c7bfb3b2f7: Status 404 returned error can't find the container with id e825f9a94f9413fb89632740658fc740802cdd01a8bf1daf687056c7bfb3b2f7 Feb 16 17:02:59 crc kubenswrapper[4794]: I0216 17:02:59.642424 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" event={"ID":"f1f6d94a-c62e-408a-8cce-f051d76fc074","Type":"ContainerStarted","Data":"007bd8759ae4b6f2dae67c434b02370f68b0293c8368ee3e71a7c662894a1823"} Feb 16 17:02:59 crc kubenswrapper[4794]: I0216 17:02:59.642459 4794 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" event={"ID":"f1f6d94a-c62e-408a-8cce-f051d76fc074","Type":"ContainerStarted","Data":"e825f9a94f9413fb89632740658fc740802cdd01a8bf1daf687056c7bfb3b2f7"} Feb 16 17:03:00 crc kubenswrapper[4794]: I0216 17:03:00.648028 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" Feb 16 17:03:00 crc kubenswrapper[4794]: I0216 17:03:00.654117 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" Feb 16 17:03:00 crc kubenswrapper[4794]: I0216 17:03:00.673677 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5d49c9496c-ptmzk" podStartSLOduration=35.673651646 podStartE2EDuration="35.673651646s" podCreationTimestamp="2026-02-16 17:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:03:00.672689439 +0000 UTC m=+206.620784106" watchObservedRunningTime="2026-02-16 17:03:00.673651646 +0000 UTC m=+206.621746333" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.097069 4794 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.097998 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.098069 4794 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.098641 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f" gracePeriod=15 Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.098692 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38" gracePeriod=15 Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.098730 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d" gracePeriod=15 Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.098752 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992" gracePeriod=15 Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.098754 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873" gracePeriod=15 Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.099797 4794 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 17:03:04 crc kubenswrapper[4794]: E0216 17:03:04.100106 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.100121 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 17:03:04 crc kubenswrapper[4794]: E0216 17:03:04.100140 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.100146 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 17:03:04 crc kubenswrapper[4794]: E0216 17:03:04.100158 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.100170 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 17:03:04 crc kubenswrapper[4794]: E0216 17:03:04.100189 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.100198 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 17:03:04 crc kubenswrapper[4794]: E0216 17:03:04.100215 4794 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.100224 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 17:03:04 crc kubenswrapper[4794]: E0216 17:03:04.100235 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.100242 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 17:03:04 crc kubenswrapper[4794]: E0216 17:03:04.100260 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.100268 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.100463 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.100478 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.100490 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.100497 4794 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.100505 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.100700 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.141943 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.157529 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.157581 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.157613 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 
17:03:04.157644 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.157664 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.157689 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.157704 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.157725 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:04 
crc kubenswrapper[4794]: I0216 17:03:04.260065 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.260512 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.260221 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.260552 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.260623 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.260635 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.260652 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.260683 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.260696 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.260699 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.260730 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.260703 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.260784 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.260811 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.260881 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.260913 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: E0216 17:03:04.351630 4794 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:04 crc kubenswrapper[4794]: E0216 17:03:04.351869 4794 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:04 crc kubenswrapper[4794]: E0216 17:03:04.352071 4794 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:04 crc kubenswrapper[4794]: E0216 17:03:04.352276 4794 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:04 crc kubenswrapper[4794]: E0216 17:03:04.352505 4794 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.352537 4794 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 17:03:04 crc kubenswrapper[4794]: E0216 17:03:04.352720 4794 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="200ms" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.440286 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:04 crc kubenswrapper[4794]: E0216 17:03:04.467901 4794 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.151:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894c8d839744040 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 17:03:04.465858624 +0000 UTC m=+210.413953271,LastTimestamp:2026-02-16 17:03:04.465858624 +0000 UTC m=+210.413953271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 17:03:04 crc kubenswrapper[4794]: E0216 17:03:04.554721 4794 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="400ms" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 
17:03:04.674404 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.675698 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.676430 4794 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38" exitCode=0 Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.676455 4794 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992" exitCode=0 Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.676465 4794 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873" exitCode=0 Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.676473 4794 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d" exitCode=2 Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.676558 4794 scope.go:117] "RemoveContainer" containerID="a8eda4d5bc3f99d2c2ade7332ca636a2860ee6767ed864d9fca9fd9f82577c5d" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.677682 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7fb3169b0f71d84ba12f85ffc52334d969e6b1f589251c2134815d0cb755960a"} Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 
17:03:04.680170 4794 generic.go:334] "Generic (PLEG): container finished" podID="4cd94818-7cad-4289-9c9d-ebdddf83a6c8" containerID="9edf09f0a5c258a8a2a6e66cb66b99e2069e04fe0028acc4c1d87e32eefe84a1" exitCode=0 Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.680211 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4cd94818-7cad-4289-9c9d-ebdddf83a6c8","Type":"ContainerDied","Data":"9edf09f0a5c258a8a2a6e66cb66b99e2069e04fe0028acc4c1d87e32eefe84a1"} Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.680988 4794 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.681396 4794 status_manager.go:851] "Failed to get status for pod" podUID="4cd94818-7cad-4289-9c9d-ebdddf83a6c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.681938 4794 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.793545 4794 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.793933 4794 status_manager.go:851] "Failed to get status for pod" podUID="4cd94818-7cad-4289-9c9d-ebdddf83a6c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:04 crc kubenswrapper[4794]: I0216 17:03:04.794315 4794 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:04 crc kubenswrapper[4794]: E0216 17:03:04.955828 4794 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="800ms" Feb 16 17:03:05 crc kubenswrapper[4794]: I0216 17:03:05.692143 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e3f7d8b8783366f704c98b2c23ee01f3156a43ced533c9404fec957563d92f83"} Feb 16 17:03:05 crc kubenswrapper[4794]: I0216 17:03:05.693768 4794 status_manager.go:851] "Failed to get status for pod" podUID="4cd94818-7cad-4289-9c9d-ebdddf83a6c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: 
connection refused" Feb 16 17:03:05 crc kubenswrapper[4794]: I0216 17:03:05.694327 4794 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:05 crc kubenswrapper[4794]: I0216 17:03:05.695979 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 17:03:05 crc kubenswrapper[4794]: E0216 17:03:05.757579 4794 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="1.6s" Feb 16 17:03:06 crc kubenswrapper[4794]: I0216 17:03:06.144990 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:03:06 crc kubenswrapper[4794]: I0216 17:03:06.145576 4794 status_manager.go:851] "Failed to get status for pod" podUID="4cd94818-7cad-4289-9c9d-ebdddf83a6c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:06 crc kubenswrapper[4794]: I0216 17:03:06.145892 4794 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:06 crc kubenswrapper[4794]: I0216 17:03:06.190276 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cd94818-7cad-4289-9c9d-ebdddf83a6c8-kubelet-dir\") pod \"4cd94818-7cad-4289-9c9d-ebdddf83a6c8\" (UID: \"4cd94818-7cad-4289-9c9d-ebdddf83a6c8\") " Feb 16 17:03:06 crc kubenswrapper[4794]: I0216 17:03:06.190408 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cd94818-7cad-4289-9c9d-ebdddf83a6c8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4cd94818-7cad-4289-9c9d-ebdddf83a6c8" (UID: "4cd94818-7cad-4289-9c9d-ebdddf83a6c8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:03:06 crc kubenswrapper[4794]: I0216 17:03:06.190415 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cd94818-7cad-4289-9c9d-ebdddf83a6c8-kube-api-access\") pod \"4cd94818-7cad-4289-9c9d-ebdddf83a6c8\" (UID: \"4cd94818-7cad-4289-9c9d-ebdddf83a6c8\") " Feb 16 17:03:06 crc kubenswrapper[4794]: I0216 17:03:06.190461 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4cd94818-7cad-4289-9c9d-ebdddf83a6c8-var-lock\") pod \"4cd94818-7cad-4289-9c9d-ebdddf83a6c8\" (UID: \"4cd94818-7cad-4289-9c9d-ebdddf83a6c8\") " Feb 16 17:03:06 crc kubenswrapper[4794]: I0216 17:03:06.190596 4794 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cd94818-7cad-4289-9c9d-ebdddf83a6c8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:06 crc kubenswrapper[4794]: I0216 17:03:06.190688 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cd94818-7cad-4289-9c9d-ebdddf83a6c8-var-lock" (OuterVolumeSpecName: "var-lock") pod "4cd94818-7cad-4289-9c9d-ebdddf83a6c8" (UID: "4cd94818-7cad-4289-9c9d-ebdddf83a6c8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:03:06 crc kubenswrapper[4794]: I0216 17:03:06.196059 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd94818-7cad-4289-9c9d-ebdddf83a6c8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4cd94818-7cad-4289-9c9d-ebdddf83a6c8" (UID: "4cd94818-7cad-4289-9c9d-ebdddf83a6c8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:03:06 crc kubenswrapper[4794]: I0216 17:03:06.291898 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cd94818-7cad-4289-9c9d-ebdddf83a6c8-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:06 crc kubenswrapper[4794]: I0216 17:03:06.291976 4794 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4cd94818-7cad-4289-9c9d-ebdddf83a6c8-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:06 crc kubenswrapper[4794]: I0216 17:03:06.706497 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:03:06 crc kubenswrapper[4794]: I0216 17:03:06.706501 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4cd94818-7cad-4289-9c9d-ebdddf83a6c8","Type":"ContainerDied","Data":"509f565bef29b46d1fd24ad7e0aa27c2c8539e60eee58c65d7cec2ffc9841677"} Feb 16 17:03:06 crc kubenswrapper[4794]: I0216 17:03:06.707335 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="509f565bef29b46d1fd24ad7e0aa27c2c8539e60eee58c65d7cec2ffc9841677" Feb 16 17:03:06 crc kubenswrapper[4794]: I0216 17:03:06.709734 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 17:03:06 crc kubenswrapper[4794]: I0216 17:03:06.711187 4794 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f" exitCode=0 Feb 16 17:03:06 crc kubenswrapper[4794]: I0216 17:03:06.722825 4794 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:06 crc kubenswrapper[4794]: I0216 17:03:06.723236 4794 status_manager.go:851] "Failed to get status for pod" podUID="4cd94818-7cad-4289-9c9d-ebdddf83a6c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.276466 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.278176 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.278805 4794 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.279074 4794 status_manager.go:851] "Failed to get status for pod" podUID="4cd94818-7cad-4289-9c9d-ebdddf83a6c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.279636 4794 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.307416 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.307886 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.308019 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.308044 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.308193 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: 
"f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.308257 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.308457 4794 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.308484 4794 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.308499 4794 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:07 crc kubenswrapper[4794]: E0216 17:03:07.358610 4794 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="3.2s" Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.720489 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.723008 4794 scope.go:117] "RemoveContainer" 
containerID="d200142d2888a7c2ea4f3f93fa6f9c3173bca214d6f5886d224d359460ecff38" Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.723223 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.748335 4794 scope.go:117] "RemoveContainer" containerID="725c67151028fd83ac9a9d3fcd794aa5bfe1f2c5fe944f5cb6f5701422478992" Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.755925 4794 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.756319 4794 status_manager.go:851] "Failed to get status for pod" podUID="4cd94818-7cad-4289-9c9d-ebdddf83a6c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.756773 4794 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.762360 4794 scope.go:117] "RemoveContainer" containerID="caeb12e07078490ca2d40bb7de73154d187f73d864b9b926c178e9844531b873" Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.775673 4794 scope.go:117] "RemoveContainer" containerID="0020aa06c487240558c084c0a1a90ee3fbb6e0353aa0fc6119f18e82fb0c1c0d" 
Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.789563 4794 scope.go:117] "RemoveContainer" containerID="a38b04d5c49a3f57d2fc5a362757445c7cadb70294f4a8a2d5102a041d4fca6f" Feb 16 17:03:07 crc kubenswrapper[4794]: I0216 17:03:07.806379 4794 scope.go:117] "RemoveContainer" containerID="4a9b0aa3ba1c6ee098cf88b9326e66366edc7ac6b74eb8afc1343909b28bc608" Feb 16 17:03:08 crc kubenswrapper[4794]: I0216 17:03:08.802822 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 16 17:03:10 crc kubenswrapper[4794]: E0216 17:03:10.570525 4794 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="6.4s" Feb 16 17:03:12 crc kubenswrapper[4794]: E0216 17:03:12.762868 4794 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.151:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894c8d839744040 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 17:03:04.465858624 +0000 UTC m=+210.413953271,LastTimestamp:2026-02-16 17:03:04.465858624 +0000 UTC 
m=+210.413953271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 17:03:12 crc kubenswrapper[4794]: E0216 17:03:12.825005 4794 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.151:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" volumeName="registry-storage" Feb 16 17:03:14 crc kubenswrapper[4794]: I0216 17:03:14.793802 4794 status_manager.go:851] "Failed to get status for pod" podUID="4cd94818-7cad-4289-9c9d-ebdddf83a6c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:14 crc kubenswrapper[4794]: I0216 17:03:14.795319 4794 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:15 crc kubenswrapper[4794]: I0216 17:03:15.790405 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:15 crc kubenswrapper[4794]: I0216 17:03:15.791420 4794 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:15 crc kubenswrapper[4794]: I0216 17:03:15.791788 4794 status_manager.go:851] "Failed to get status for pod" podUID="4cd94818-7cad-4289-9c9d-ebdddf83a6c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:15 crc kubenswrapper[4794]: I0216 17:03:15.804710 4794 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="35d61ecd-11f5-4131-b26d-7411c7be73e4" Feb 16 17:03:15 crc kubenswrapper[4794]: I0216 17:03:15.804737 4794 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="35d61ecd-11f5-4131-b26d-7411c7be73e4" Feb 16 17:03:15 crc kubenswrapper[4794]: E0216 17:03:15.805044 4794 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:15 crc kubenswrapper[4794]: I0216 17:03:15.805461 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:15 crc kubenswrapper[4794]: W0216 17:03:15.822495 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-e2454a280d4ac2b8a502ed693c102213fa9ee211a6e0fae102054e53d4e82040 WatchSource:0}: Error finding container e2454a280d4ac2b8a502ed693c102213fa9ee211a6e0fae102054e53d4e82040: Status 404 returned error can't find the container with id e2454a280d4ac2b8a502ed693c102213fa9ee211a6e0fae102054e53d4e82040 Feb 16 17:03:16 crc kubenswrapper[4794]: I0216 17:03:16.788483 4794 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="2edd9f214fa6338e882652a4e54183edc8e646bbdc13a6ef108c16dc4e6a90a6" exitCode=0 Feb 16 17:03:16 crc kubenswrapper[4794]: I0216 17:03:16.788586 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"2edd9f214fa6338e882652a4e54183edc8e646bbdc13a6ef108c16dc4e6a90a6"} Feb 16 17:03:16 crc kubenswrapper[4794]: I0216 17:03:16.788741 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e2454a280d4ac2b8a502ed693c102213fa9ee211a6e0fae102054e53d4e82040"} Feb 16 17:03:16 crc kubenswrapper[4794]: I0216 17:03:16.789158 4794 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="35d61ecd-11f5-4131-b26d-7411c7be73e4" Feb 16 17:03:16 crc kubenswrapper[4794]: I0216 17:03:16.789172 4794 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="35d61ecd-11f5-4131-b26d-7411c7be73e4" Feb 16 17:03:16 crc kubenswrapper[4794]: E0216 17:03:16.789601 4794 mirror_client.go:138] 
"Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:16 crc kubenswrapper[4794]: I0216 17:03:16.790061 4794 status_manager.go:851] "Failed to get status for pod" podUID="4cd94818-7cad-4289-9c9d-ebdddf83a6c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:16 crc kubenswrapper[4794]: I0216 17:03:16.790419 4794 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 16 17:03:16 crc kubenswrapper[4794]: E0216 17:03:16.971248 4794 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="7s" Feb 16 17:03:17 crc kubenswrapper[4794]: I0216 17:03:17.794850 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"78cb866e04b789ddcf4b5e65316404a29f7e03c087ae6925cc1812dc60178140"} Feb 16 17:03:17 crc kubenswrapper[4794]: I0216 17:03:17.795172 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"67663b46c2b64e915a70c1898194210c19902692957b36796231c1083179f6c7"} Feb 16 17:03:18 crc kubenswrapper[4794]: I0216 17:03:18.802370 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d9fc7cd0f7bbcb7450db0c3e31a693444dd980fa567de241c5263d71641d061e"} Feb 16 17:03:18 crc kubenswrapper[4794]: I0216 17:03:18.802922 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"28dc1954b50effbd8047960d6596069a8d467eb8b0ec28b6a2f65f90ee8bad33"} Feb 16 17:03:18 crc kubenswrapper[4794]: I0216 17:03:18.802949 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:18 crc kubenswrapper[4794]: I0216 17:03:18.802960 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"eb267a71ec85f650da71e0501e9e90e02024147647652f08d7c3bf20de5fef58"} Feb 16 17:03:18 crc kubenswrapper[4794]: I0216 17:03:18.802747 4794 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="35d61ecd-11f5-4131-b26d-7411c7be73e4" Feb 16 17:03:18 crc kubenswrapper[4794]: I0216 17:03:18.802983 4794 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="35d61ecd-11f5-4131-b26d-7411c7be73e4" Feb 16 17:03:19 crc kubenswrapper[4794]: I0216 17:03:19.075890 4794 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 
192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 17:03:19 crc kubenswrapper[4794]: I0216 17:03:19.075970 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 16 17:03:19 crc kubenswrapper[4794]: I0216 17:03:19.809679 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 17:03:19 crc kubenswrapper[4794]: I0216 17:03:19.809720 4794 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc" exitCode=1 Feb 16 17:03:19 crc kubenswrapper[4794]: I0216 17:03:19.809747 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc"} Feb 16 17:03:19 crc kubenswrapper[4794]: I0216 17:03:19.810137 4794 scope.go:117] "RemoveContainer" containerID="4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc" Feb 16 17:03:20 crc kubenswrapper[4794]: I0216 17:03:20.140218 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:03:20 crc kubenswrapper[4794]: I0216 17:03:20.140573 4794 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:03:20 crc kubenswrapper[4794]: I0216 17:03:20.140614 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 17:03:20 crc kubenswrapper[4794]: I0216 17:03:20.142270 4794 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"97257c683f36fec9a6d4d0e7ee85af2ea7fa6143869803e437f99862f5e1d18a"} pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:03:20 crc kubenswrapper[4794]: I0216 17:03:20.142368 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" containerID="cri-o://97257c683f36fec9a6d4d0e7ee85af2ea7fa6143869803e437f99862f5e1d18a" gracePeriod=600 Feb 16 17:03:20 crc kubenswrapper[4794]: I0216 17:03:20.806174 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:20 crc kubenswrapper[4794]: I0216 17:03:20.806790 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:20 crc kubenswrapper[4794]: I0216 17:03:20.811538 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:20 crc kubenswrapper[4794]: I0216 17:03:20.817826 4794 generic.go:334] "Generic (PLEG): container finished" 
podID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerID="97257c683f36fec9a6d4d0e7ee85af2ea7fa6143869803e437f99862f5e1d18a" exitCode=0 Feb 16 17:03:20 crc kubenswrapper[4794]: I0216 17:03:20.817887 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerDied","Data":"97257c683f36fec9a6d4d0e7ee85af2ea7fa6143869803e437f99862f5e1d18a"} Feb 16 17:03:20 crc kubenswrapper[4794]: I0216 17:03:20.817917 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"b2e80e5061d3d639e2192db6249af8300dc44db1cba1d8938a19b86cfdd0833f"} Feb 16 17:03:20 crc kubenswrapper[4794]: I0216 17:03:20.820524 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 17:03:20 crc kubenswrapper[4794]: I0216 17:03:20.820583 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"24e0689419b5ee1d32e73c863543f1af4d7958f6f4e9beaa513c431cfb5d48a7"} Feb 16 17:03:23 crc kubenswrapper[4794]: I0216 17:03:23.821780 4794 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:24 crc kubenswrapper[4794]: I0216 17:03:24.821914 4794 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1d099e64-254a-4ab2-9284-dea4610351b7" Feb 16 17:03:24 crc kubenswrapper[4794]: I0216 17:03:24.837591 4794 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="35d61ecd-11f5-4131-b26d-7411c7be73e4" Feb 16 17:03:24 crc kubenswrapper[4794]: I0216 17:03:24.837635 4794 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="35d61ecd-11f5-4131-b26d-7411c7be73e4" Feb 16 17:03:24 crc kubenswrapper[4794]: I0216 17:03:24.840996 4794 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1d099e64-254a-4ab2-9284-dea4610351b7" Feb 16 17:03:24 crc kubenswrapper[4794]: I0216 17:03:24.842084 4794 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://67663b46c2b64e915a70c1898194210c19902692957b36796231c1083179f6c7" Feb 16 17:03:24 crc kubenswrapper[4794]: I0216 17:03:24.842110 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:25 crc kubenswrapper[4794]: I0216 17:03:25.847104 4794 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="35d61ecd-11f5-4131-b26d-7411c7be73e4" Feb 16 17:03:25 crc kubenswrapper[4794]: I0216 17:03:25.847136 4794 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="35d61ecd-11f5-4131-b26d-7411c7be73e4" Feb 16 17:03:25 crc kubenswrapper[4794]: I0216 17:03:25.851169 4794 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1d099e64-254a-4ab2-9284-dea4610351b7" Feb 16 17:03:27 crc kubenswrapper[4794]: I0216 17:03:27.516383 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 17:03:27 crc 
kubenswrapper[4794]: I0216 17:03:27.517090 4794 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 17:03:27 crc kubenswrapper[4794]: I0216 17:03:27.517132 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 16 17:03:29 crc kubenswrapper[4794]: I0216 17:03:29.074996 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 17:03:33 crc kubenswrapper[4794]: I0216 17:03:33.045770 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 17:03:33 crc kubenswrapper[4794]: I0216 17:03:33.768791 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 17:03:33 crc kubenswrapper[4794]: I0216 17:03:33.869031 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 17:03:33 crc kubenswrapper[4794]: I0216 17:03:33.938014 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 17:03:33 crc kubenswrapper[4794]: I0216 17:03:33.944564 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 17:03:34 crc kubenswrapper[4794]: I0216 17:03:34.338031 4794 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 17:03:34 crc 
kubenswrapper[4794]: I0216 17:03:34.341683 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=30.341666838 podStartE2EDuration="30.341666838s" podCreationTimestamp="2026-02-16 17:03:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:03:23.843322037 +0000 UTC m=+229.791416704" watchObservedRunningTime="2026-02-16 17:03:34.341666838 +0000 UTC m=+240.289761485" Feb 16 17:03:34 crc kubenswrapper[4794]: I0216 17:03:34.341913 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 17:03:34 crc kubenswrapper[4794]: I0216 17:03:34.341944 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 17:03:34 crc kubenswrapper[4794]: I0216 17:03:34.346715 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:34 crc kubenswrapper[4794]: I0216 17:03:34.368749 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=11.368721819 podStartE2EDuration="11.368721819s" podCreationTimestamp="2026-02-16 17:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:03:34.363676679 +0000 UTC m=+240.311771326" watchObservedRunningTime="2026-02-16 17:03:34.368721819 +0000 UTC m=+240.316816506" Feb 16 17:03:34 crc kubenswrapper[4794]: I0216 17:03:34.591197 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 17:03:34 crc kubenswrapper[4794]: I0216 17:03:34.616028 4794 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 17:03:34 crc kubenswrapper[4794]: I0216 17:03:34.922441 4794 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 17:03:34 crc kubenswrapper[4794]: I0216 17:03:34.923014 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://e3f7d8b8783366f704c98b2c23ee01f3156a43ced533c9404fec957563d92f83" gracePeriod=5 Feb 16 17:03:35 crc kubenswrapper[4794]: I0216 17:03:35.478386 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 17:03:35 crc kubenswrapper[4794]: I0216 17:03:35.522459 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 17:03:35 crc kubenswrapper[4794]: I0216 17:03:35.910166 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 17:03:36 crc kubenswrapper[4794]: I0216 17:03:36.117438 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 17:03:36 crc kubenswrapper[4794]: I0216 17:03:36.163721 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 17:03:36 crc kubenswrapper[4794]: I0216 17:03:36.270156 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 17:03:36 crc kubenswrapper[4794]: I0216 17:03:36.389676 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 17:03:36 crc 
kubenswrapper[4794]: I0216 17:03:36.418145 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 17:03:36 crc kubenswrapper[4794]: I0216 17:03:36.639710 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 17:03:36 crc kubenswrapper[4794]: I0216 17:03:36.695694 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 17:03:36 crc kubenswrapper[4794]: I0216 17:03:36.815759 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 17:03:36 crc kubenswrapper[4794]: I0216 17:03:36.821467 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 17:03:36 crc kubenswrapper[4794]: I0216 17:03:36.949813 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 17:03:37 crc kubenswrapper[4794]: I0216 17:03:37.005071 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 17:03:37 crc kubenswrapper[4794]: I0216 17:03:37.187250 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 17:03:37 crc kubenswrapper[4794]: I0216 17:03:37.249279 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 17:03:37 crc kubenswrapper[4794]: I0216 17:03:37.516708 4794 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": 
dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 17:03:37 crc kubenswrapper[4794]: I0216 17:03:37.516785 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 16 17:03:37 crc kubenswrapper[4794]: I0216 17:03:37.549992 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 17:03:37 crc kubenswrapper[4794]: I0216 17:03:37.558026 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 17:03:37 crc kubenswrapper[4794]: I0216 17:03:37.659090 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 17:03:37 crc kubenswrapper[4794]: I0216 17:03:37.680276 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 17:03:37 crc kubenswrapper[4794]: I0216 17:03:37.715209 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 17:03:37 crc kubenswrapper[4794]: I0216 17:03:37.842681 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 17:03:37 crc kubenswrapper[4794]: I0216 17:03:37.997489 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 17:03:38 crc kubenswrapper[4794]: I0216 17:03:38.004430 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 
17:03:38 crc kubenswrapper[4794]: I0216 17:03:38.239724 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 17:03:38 crc kubenswrapper[4794]: I0216 17:03:38.249383 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 17:03:38 crc kubenswrapper[4794]: I0216 17:03:38.257331 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 17:03:38 crc kubenswrapper[4794]: I0216 17:03:38.298549 4794 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 17:03:38 crc kubenswrapper[4794]: I0216 17:03:38.334888 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 17:03:38 crc kubenswrapper[4794]: I0216 17:03:38.423535 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 17:03:38 crc kubenswrapper[4794]: I0216 17:03:38.424438 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 17:03:38 crc kubenswrapper[4794]: I0216 17:03:38.430198 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 17:03:38 crc kubenswrapper[4794]: I0216 17:03:38.526228 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 17:03:38 crc kubenswrapper[4794]: I0216 17:03:38.533955 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 17:03:38 crc kubenswrapper[4794]: I0216 17:03:38.556053 4794 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 17:03:38 crc kubenswrapper[4794]: I0216 17:03:38.600608 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 17:03:38 crc kubenswrapper[4794]: I0216 17:03:38.703037 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 17:03:38 crc kubenswrapper[4794]: I0216 17:03:38.730070 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 17:03:38 crc kubenswrapper[4794]: I0216 17:03:38.745897 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 17:03:38 crc kubenswrapper[4794]: I0216 17:03:38.836853 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 17:03:38 crc kubenswrapper[4794]: I0216 17:03:38.851896 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 17:03:38 crc kubenswrapper[4794]: I0216 17:03:38.907188 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 17:03:38 crc kubenswrapper[4794]: I0216 17:03:38.972890 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 17:03:39 crc kubenswrapper[4794]: I0216 17:03:39.100256 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 17:03:39 crc kubenswrapper[4794]: I0216 17:03:39.101245 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" 
Feb 16 17:03:39 crc kubenswrapper[4794]: I0216 17:03:39.106623 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 17:03:39 crc kubenswrapper[4794]: I0216 17:03:39.243025 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 17:03:39 crc kubenswrapper[4794]: I0216 17:03:39.328391 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 17:03:39 crc kubenswrapper[4794]: I0216 17:03:39.435803 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 17:03:39 crc kubenswrapper[4794]: I0216 17:03:39.480433 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 17:03:39 crc kubenswrapper[4794]: I0216 17:03:39.493176 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 17:03:39 crc kubenswrapper[4794]: I0216 17:03:39.571531 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 17:03:39 crc kubenswrapper[4794]: I0216 17:03:39.616183 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 17:03:39 crc kubenswrapper[4794]: I0216 17:03:39.643051 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 17:03:39 crc kubenswrapper[4794]: I0216 17:03:39.698860 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 17:03:39 crc kubenswrapper[4794]: I0216 17:03:39.713217 4794 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 17:03:39 crc kubenswrapper[4794]: I0216 17:03:39.782451 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 17:03:39 crc kubenswrapper[4794]: I0216 17:03:39.924560 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.260987 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.261646 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.264929 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.320723 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.379749 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.403166 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.409202 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.430419 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.473285 4794 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.511636 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.511711 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.535433 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.535493 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.535571 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.535607 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.535639 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.535664 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.535666 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.535691 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.535792 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.535931 4794 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.535947 4794 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.535959 4794 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.535970 4794 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.545229 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.566169 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.589702 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.594671 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.636901 4794 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.649089 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.798334 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.798401 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.798837 4794 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.807837 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.807891 4794 kubelet.go:2649] "Unable to find pod for mirror pod, 
skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="b81de265-848c-4032-9bca-b6933c66b40e" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.809634 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.809673 4794 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="b81de265-848c-4032-9bca-b6933c66b40e" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.949190 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.949253 4794 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="e3f7d8b8783366f704c98b2c23ee01f3156a43ced533c9404fec957563d92f83" exitCode=137 Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.949328 4794 scope.go:117] "RemoveContainer" containerID="e3f7d8b8783366f704c98b2c23ee01f3156a43ced533c9404fec957563d92f83" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.949375 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.968076 4794 scope.go:117] "RemoveContainer" containerID="e3f7d8b8783366f704c98b2c23ee01f3156a43ced533c9404fec957563d92f83" Feb 16 17:03:40 crc kubenswrapper[4794]: E0216 17:03:40.968595 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3f7d8b8783366f704c98b2c23ee01f3156a43ced533c9404fec957563d92f83\": container with ID starting with e3f7d8b8783366f704c98b2c23ee01f3156a43ced533c9404fec957563d92f83 not found: ID does not exist" containerID="e3f7d8b8783366f704c98b2c23ee01f3156a43ced533c9404fec957563d92f83" Feb 16 17:03:40 crc kubenswrapper[4794]: I0216 17:03:40.968640 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3f7d8b8783366f704c98b2c23ee01f3156a43ced533c9404fec957563d92f83"} err="failed to get container status \"e3f7d8b8783366f704c98b2c23ee01f3156a43ced533c9404fec957563d92f83\": rpc error: code = NotFound desc = could not find container \"e3f7d8b8783366f704c98b2c23ee01f3156a43ced533c9404fec957563d92f83\": container with ID starting with e3f7d8b8783366f704c98b2c23ee01f3156a43ced533c9404fec957563d92f83 not found: ID does not exist" Feb 16 17:03:41 crc kubenswrapper[4794]: I0216 17:03:41.071469 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 17:03:41 crc kubenswrapper[4794]: I0216 17:03:41.072947 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 17:03:41 crc kubenswrapper[4794]: I0216 17:03:41.193134 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 17:03:41 crc kubenswrapper[4794]: I0216 17:03:41.195112 4794 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 17:03:41 crc kubenswrapper[4794]: I0216 17:03:41.206759 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 17:03:41 crc kubenswrapper[4794]: I0216 17:03:41.233554 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 17:03:41 crc kubenswrapper[4794]: I0216 17:03:41.292075 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 17:03:41 crc kubenswrapper[4794]: I0216 17:03:41.355908 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 17:03:41 crc kubenswrapper[4794]: I0216 17:03:41.356347 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 17:03:41 crc kubenswrapper[4794]: I0216 17:03:41.407507 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 17:03:41 crc kubenswrapper[4794]: I0216 17:03:41.470706 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 17:03:41 crc kubenswrapper[4794]: I0216 17:03:41.536843 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 17:03:41 crc kubenswrapper[4794]: I0216 17:03:41.567246 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 17:03:41 crc kubenswrapper[4794]: I0216 17:03:41.599498 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 17:03:41 crc kubenswrapper[4794]: I0216 
17:03:41.626435 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 17:03:41 crc kubenswrapper[4794]: I0216 17:03:41.638910 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 17:03:41 crc kubenswrapper[4794]: I0216 17:03:41.913561 4794 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 17:03:42 crc kubenswrapper[4794]: I0216 17:03:42.012236 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 17:03:42 crc kubenswrapper[4794]: I0216 17:03:42.120538 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 17:03:42 crc kubenswrapper[4794]: I0216 17:03:42.167267 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 17:03:42 crc kubenswrapper[4794]: I0216 17:03:42.221672 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 17:03:42 crc kubenswrapper[4794]: I0216 17:03:42.283598 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 17:03:42 crc kubenswrapper[4794]: I0216 17:03:42.310411 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 17:03:42 crc kubenswrapper[4794]: I0216 17:03:42.312508 4794 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 17:03:42 crc kubenswrapper[4794]: I0216 17:03:42.368447 4794 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 17:03:42 crc kubenswrapper[4794]: I0216 17:03:42.416853 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 17:03:42 crc kubenswrapper[4794]: I0216 17:03:42.431024 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 17:03:42 crc kubenswrapper[4794]: I0216 17:03:42.650286 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 17:03:42 crc kubenswrapper[4794]: I0216 17:03:42.657790 4794 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 17:03:42 crc kubenswrapper[4794]: I0216 17:03:42.848766 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 17:03:42 crc kubenswrapper[4794]: I0216 17:03:42.872825 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 17:03:42 crc kubenswrapper[4794]: I0216 17:03:42.903753 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 17:03:42 crc kubenswrapper[4794]: I0216 17:03:42.929712 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 17:03:42 crc kubenswrapper[4794]: I0216 17:03:42.972973 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 17:03:42 crc kubenswrapper[4794]: I0216 17:03:42.973049 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 17:03:42 
crc kubenswrapper[4794]: I0216 17:03:42.978925 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 17:03:43 crc kubenswrapper[4794]: I0216 17:03:43.015509 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 17:03:43 crc kubenswrapper[4794]: I0216 17:03:43.030747 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 17:03:43 crc kubenswrapper[4794]: I0216 17:03:43.196597 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 17:03:43 crc kubenswrapper[4794]: I0216 17:03:43.201547 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 17:03:43 crc kubenswrapper[4794]: I0216 17:03:43.210958 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 17:03:43 crc kubenswrapper[4794]: I0216 17:03:43.257165 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 17:03:43 crc kubenswrapper[4794]: I0216 17:03:43.383850 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 17:03:43 crc kubenswrapper[4794]: I0216 17:03:43.482604 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 17:03:43 crc kubenswrapper[4794]: I0216 17:03:43.582087 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 17:03:43 crc kubenswrapper[4794]: I0216 17:03:43.719777 4794 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 17:03:43 crc kubenswrapper[4794]: I0216 17:03:43.739778 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 17:03:43 crc kubenswrapper[4794]: I0216 17:03:43.769089 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 17:03:43 crc kubenswrapper[4794]: I0216 17:03:43.879645 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 17:03:43 crc kubenswrapper[4794]: I0216 17:03:43.914902 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 17:03:43 crc kubenswrapper[4794]: I0216 17:03:43.931834 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.000805 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.048962 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.090386 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.243331 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.260208 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 
17:03:44.277070 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.347366 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.373004 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.381503 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.401439 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.439272 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.486095 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.544245 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.613445 4794 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.677041 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.722260 4794 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.769671 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.825619 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.851368 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.867890 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.870051 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.923929 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.933952 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 17:03:44 crc kubenswrapper[4794]: I0216 17:03:44.942682 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 17:03:45 crc kubenswrapper[4794]: I0216 17:03:45.009680 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 17:03:45 crc kubenswrapper[4794]: I0216 17:03:45.018374 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 
16 17:03:45 crc kubenswrapper[4794]: I0216 17:03:45.085648 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 17:03:45 crc kubenswrapper[4794]: I0216 17:03:45.102984 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 17:03:45 crc kubenswrapper[4794]: I0216 17:03:45.272160 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 17:03:45 crc kubenswrapper[4794]: I0216 17:03:45.349941 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 17:03:45 crc kubenswrapper[4794]: I0216 17:03:45.418777 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 17:03:45 crc kubenswrapper[4794]: I0216 17:03:45.498092 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 17:03:45 crc kubenswrapper[4794]: I0216 17:03:45.555869 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 17:03:45 crc kubenswrapper[4794]: I0216 17:03:45.568359 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 17:03:45 crc kubenswrapper[4794]: I0216 17:03:45.568783 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 17:03:45 crc kubenswrapper[4794]: I0216 17:03:45.575427 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 17:03:45 crc kubenswrapper[4794]: I0216 17:03:45.595896 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 
16 17:03:45 crc kubenswrapper[4794]: I0216 17:03:45.694074 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 17:03:45 crc kubenswrapper[4794]: I0216 17:03:45.731589 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 17:03:45 crc kubenswrapper[4794]: I0216 17:03:45.826796 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 17:03:45 crc kubenswrapper[4794]: I0216 17:03:45.838202 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 17:03:45 crc kubenswrapper[4794]: I0216 17:03:45.878648 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 17:03:45 crc kubenswrapper[4794]: I0216 17:03:45.938822 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 17:03:45 crc kubenswrapper[4794]: I0216 17:03:45.941222 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 17:03:45 crc kubenswrapper[4794]: I0216 17:03:45.948749 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 17:03:46 crc kubenswrapper[4794]: I0216 17:03:46.138260 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 17:03:46 crc kubenswrapper[4794]: I0216 17:03:46.146982 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 17:03:46 crc kubenswrapper[4794]: I0216 17:03:46.190177 4794 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 17:03:46 crc kubenswrapper[4794]: I0216 17:03:46.212023 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 17:03:46 crc kubenswrapper[4794]: I0216 17:03:46.279042 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 17:03:46 crc kubenswrapper[4794]: I0216 17:03:46.339122 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 17:03:46 crc kubenswrapper[4794]: I0216 17:03:46.357233 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 17:03:46 crc kubenswrapper[4794]: I0216 17:03:46.410586 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 17:03:46 crc kubenswrapper[4794]: I0216 17:03:46.423030 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 17:03:46 crc kubenswrapper[4794]: I0216 17:03:46.453669 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 17:03:46 crc kubenswrapper[4794]: I0216 17:03:46.483704 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 17:03:46 crc kubenswrapper[4794]: I0216 17:03:46.514405 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 17:03:46 crc kubenswrapper[4794]: I0216 17:03:46.675776 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 17:03:46 crc kubenswrapper[4794]: I0216 
17:03:46.683119 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 17:03:46 crc kubenswrapper[4794]: I0216 17:03:46.705728 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 17:03:46 crc kubenswrapper[4794]: I0216 17:03:46.717198 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 17:03:46 crc kubenswrapper[4794]: I0216 17:03:46.790751 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 17:03:46 crc kubenswrapper[4794]: I0216 17:03:46.798140 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 17:03:46 crc kubenswrapper[4794]: I0216 17:03:46.850778 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 17:03:46 crc kubenswrapper[4794]: I0216 17:03:46.868834 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 17:03:46 crc kubenswrapper[4794]: I0216 17:03:46.990784 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 17:03:47 crc kubenswrapper[4794]: I0216 17:03:47.175954 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 17:03:47 crc kubenswrapper[4794]: I0216 17:03:47.243326 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 17:03:47 crc kubenswrapper[4794]: I0216 17:03:47.255884 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" 
Feb 16 17:03:47 crc kubenswrapper[4794]: I0216 17:03:47.258087 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 17:03:47 crc kubenswrapper[4794]: I0216 17:03:47.300690 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 17:03:47 crc kubenswrapper[4794]: I0216 17:03:47.326008 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 17:03:47 crc kubenswrapper[4794]: I0216 17:03:47.516571 4794 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 17:03:47 crc kubenswrapper[4794]: I0216 17:03:47.516631 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 16 17:03:47 crc kubenswrapper[4794]: I0216 17:03:47.516687 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 17:03:47 crc kubenswrapper[4794]: I0216 17:03:47.517249 4794 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"24e0689419b5ee1d32e73c863543f1af4d7958f6f4e9beaa513c431cfb5d48a7"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 16 
17:03:47 crc kubenswrapper[4794]: I0216 17:03:47.517370 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://24e0689419b5ee1d32e73c863543f1af4d7958f6f4e9beaa513c431cfb5d48a7" gracePeriod=30 Feb 16 17:03:47 crc kubenswrapper[4794]: I0216 17:03:47.524161 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 17:03:47 crc kubenswrapper[4794]: I0216 17:03:47.556794 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 17:03:47 crc kubenswrapper[4794]: I0216 17:03:47.723115 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 17:03:47 crc kubenswrapper[4794]: I0216 17:03:47.726339 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 17:03:47 crc kubenswrapper[4794]: I0216 17:03:47.762940 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 17:03:47 crc kubenswrapper[4794]: I0216 17:03:47.836829 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 17:03:47 crc kubenswrapper[4794]: I0216 17:03:47.980557 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 17:03:48 crc kubenswrapper[4794]: I0216 17:03:48.151927 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 17:03:48 crc kubenswrapper[4794]: I0216 17:03:48.175975 4794 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 16 17:03:48 crc kubenswrapper[4794]: I0216 17:03:48.177313 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 16 17:03:48 crc kubenswrapper[4794]: I0216 17:03:48.216150 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 16 17:03:48 crc kubenswrapper[4794]: I0216 17:03:48.254789 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 17:03:48 crc kubenswrapper[4794]: I0216 17:03:48.293151 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 16 17:03:48 crc kubenswrapper[4794]: I0216 17:03:48.296702 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 16 17:03:48 crc kubenswrapper[4794]: I0216 17:03:48.484036 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 16 17:03:48 crc kubenswrapper[4794]: I0216 17:03:48.488138 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 17:03:48 crc kubenswrapper[4794]: I0216 17:03:48.601357 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 16 17:03:48 crc kubenswrapper[4794]: I0216 17:03:48.707055 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 16 17:03:48 crc kubenswrapper[4794]: I0216 17:03:48.716126 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 16 17:03:48 crc kubenswrapper[4794]: I0216 17:03:48.746505 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 16 17:03:48 crc kubenswrapper[4794]: I0216 17:03:48.764774 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 16 17:03:48 crc kubenswrapper[4794]: I0216 17:03:48.817042 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 16 17:03:48 crc kubenswrapper[4794]: I0216 17:03:48.862244 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 16 17:03:48 crc kubenswrapper[4794]: I0216 17:03:48.864787 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 16 17:03:48 crc kubenswrapper[4794]: I0216 17:03:48.910957 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 16 17:03:48 crc kubenswrapper[4794]: I0216 17:03:48.922007 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 16 17:03:49 crc kubenswrapper[4794]: I0216 17:03:49.299749 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 16 17:03:49 crc kubenswrapper[4794]: I0216 17:03:49.337153 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 16 17:03:49 crc kubenswrapper[4794]: I0216 17:03:49.352915 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 16 17:03:49 crc kubenswrapper[4794]: I0216 17:03:49.352968 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 16 17:03:49 crc kubenswrapper[4794]: I0216 17:03:49.448264 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 16 17:03:49 crc kubenswrapper[4794]: I0216 17:03:49.493352 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 16 17:03:49 crc kubenswrapper[4794]: I0216 17:03:49.512476 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 16 17:03:49 crc kubenswrapper[4794]: I0216 17:03:49.555631 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 16 17:03:49 crc kubenswrapper[4794]: I0216 17:03:49.935921 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 16 17:03:50 crc kubenswrapper[4794]: I0216 17:03:50.103678 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 16 17:03:50 crc kubenswrapper[4794]: I0216 17:03:50.131172 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 16 17:03:50 crc kubenswrapper[4794]: I0216 17:03:50.260894 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 16 17:03:50 crc kubenswrapper[4794]: I0216 17:03:50.478366 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 16 17:03:50 crc kubenswrapper[4794]: I0216 17:03:50.567444 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 16 17:03:50 crc kubenswrapper[4794]: I0216 17:03:50.686861 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 16 17:03:50 crc kubenswrapper[4794]: I0216 17:03:50.721863 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 16 17:03:50 crc kubenswrapper[4794]: I0216 17:03:50.786391 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 16 17:03:50 crc kubenswrapper[4794]: I0216 17:03:50.863028 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 16 17:03:50 crc kubenswrapper[4794]: I0216 17:03:50.874688 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 16 17:03:51 crc kubenswrapper[4794]: I0216 17:03:51.023332 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 16 17:03:52 crc kubenswrapper[4794]: I0216 17:03:52.005566 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 16 17:03:52 crc kubenswrapper[4794]: I0216 17:03:52.411922 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 16 17:03:53 crc kubenswrapper[4794]: I0216 17:03:53.340577 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 16 17:04:09 crc kubenswrapper[4794]: I0216 17:04:09.104344 4794 generic.go:334] "Generic (PLEG): container finished" podID="9c029145-bf5d-4a8c-9419-fdcf93c96a4d" containerID="f9cf8e3246408184e6b3aa25436ea6945ac6e95059e56bb5f8c5bec5791fe540" exitCode=0
Feb 16 17:04:09 crc kubenswrapper[4794]: I0216 17:04:09.104438 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-85b84" event={"ID":"9c029145-bf5d-4a8c-9419-fdcf93c96a4d","Type":"ContainerDied","Data":"f9cf8e3246408184e6b3aa25436ea6945ac6e95059e56bb5f8c5bec5791fe540"}
Feb 16 17:04:09 crc kubenswrapper[4794]: I0216 17:04:09.105338 4794 scope.go:117] "RemoveContainer" containerID="f9cf8e3246408184e6b3aa25436ea6945ac6e95059e56bb5f8c5bec5791fe540"
Feb 16 17:04:10 crc kubenswrapper[4794]: I0216 17:04:10.110785 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-85b84" event={"ID":"9c029145-bf5d-4a8c-9419-fdcf93c96a4d","Type":"ContainerStarted","Data":"45798dba8bdc71a58534bef3e846c8bfcaec57fd997841d29df53d367345ce9d"}
Feb 16 17:04:10 crc kubenswrapper[4794]: I0216 17:04:10.111453 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-85b84"
Feb 16 17:04:10 crc kubenswrapper[4794]: I0216 17:04:10.114723 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-85b84"
Feb 16 17:04:18 crc kubenswrapper[4794]: I0216 17:04:18.153417 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Feb 16 17:04:18 crc kubenswrapper[4794]: I0216 17:04:18.155645 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 16 17:04:18 crc kubenswrapper[4794]: I0216 17:04:18.155703 4794 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="24e0689419b5ee1d32e73c863543f1af4d7958f6f4e9beaa513c431cfb5d48a7" exitCode=137
Feb 16 17:04:18 crc kubenswrapper[4794]: I0216 17:04:18.155743 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"24e0689419b5ee1d32e73c863543f1af4d7958f6f4e9beaa513c431cfb5d48a7"}
Feb 16 17:04:18 crc kubenswrapper[4794]: I0216 17:04:18.155779 4794 scope.go:117] "RemoveContainer" containerID="4defaeccb0c13dd1f8f16284b9a707b722ef54bbfa6d6032e390771d08a750bc"
Feb 16 17:04:19 crc kubenswrapper[4794]: I0216 17:04:19.169062 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Feb 16 17:04:19 crc kubenswrapper[4794]: I0216 17:04:19.170732 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8163a07bb2d61c51f5951ad115d5d2e5a0029415040bd9ea839f33de34aefd31"}
Feb 16 17:04:27 crc kubenswrapper[4794]: I0216 17:04:27.516116 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 17:04:27 crc kubenswrapper[4794]: I0216 17:04:27.522441 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 17:04:28 crc kubenswrapper[4794]: I0216 17:04:28.222383 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 17:04:28 crc kubenswrapper[4794]: I0216 17:04:28.229007 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 17:04:34 crc kubenswrapper[4794]: I0216 17:04:34.590652 4794 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.318272 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tpgxs"]
Feb 16 17:05:00 crc kubenswrapper[4794]: E0216 17:05:00.319017 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.319029 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 16 17:05:00 crc kubenswrapper[4794]: E0216 17:05:00.319040 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd94818-7cad-4289-9c9d-ebdddf83a6c8" containerName="installer"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.319047 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd94818-7cad-4289-9c9d-ebdddf83a6c8" containerName="installer"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.319139 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.319153 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cd94818-7cad-4289-9c9d-ebdddf83a6c8" containerName="installer"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.319526 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.330914 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tpgxs"]
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.505844 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/79f7753a-f961-4052-b281-ad09dc2b10a2-registry-certificates\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.505892 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79f7753a-f961-4052-b281-ad09dc2b10a2-bound-sa-token\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.505926 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/79f7753a-f961-4052-b281-ad09dc2b10a2-registry-tls\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.505955 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.505990 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/79f7753a-f961-4052-b281-ad09dc2b10a2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.506022 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79f7753a-f961-4052-b281-ad09dc2b10a2-trusted-ca\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.506040 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82l92\" (UniqueName: \"kubernetes.io/projected/79f7753a-f961-4052-b281-ad09dc2b10a2-kube-api-access-82l92\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.506064 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/79f7753a-f961-4052-b281-ad09dc2b10a2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.530880 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.607607 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/79f7753a-f961-4052-b281-ad09dc2b10a2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.608123 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/79f7753a-f961-4052-b281-ad09dc2b10a2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.608259 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79f7753a-f961-4052-b281-ad09dc2b10a2-trusted-ca\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.608284 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82l92\" (UniqueName: \"kubernetes.io/projected/79f7753a-f961-4052-b281-ad09dc2b10a2-kube-api-access-82l92\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.608343 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/79f7753a-f961-4052-b281-ad09dc2b10a2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.608726 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/79f7753a-f961-4052-b281-ad09dc2b10a2-registry-certificates\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.609616 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79f7753a-f961-4052-b281-ad09dc2b10a2-bound-sa-token\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.609752 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/79f7753a-f961-4052-b281-ad09dc2b10a2-registry-tls\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.609933 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/79f7753a-f961-4052-b281-ad09dc2b10a2-registry-certificates\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.610209 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79f7753a-f961-4052-b281-ad09dc2b10a2-trusted-ca\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.614647 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/79f7753a-f961-4052-b281-ad09dc2b10a2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.616155 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/79f7753a-f961-4052-b281-ad09dc2b10a2-registry-tls\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.626956 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79f7753a-f961-4052-b281-ad09dc2b10a2-bound-sa-token\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.637351 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82l92\" (UniqueName: \"kubernetes.io/projected/79f7753a-f961-4052-b281-ad09dc2b10a2-kube-api-access-82l92\") pod \"image-registry-66df7c8f76-tpgxs\" (UID: \"79f7753a-f961-4052-b281-ad09dc2b10a2\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.640581 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:00 crc kubenswrapper[4794]: I0216 17:05:00.851494 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tpgxs"]
Feb 16 17:05:01 crc kubenswrapper[4794]: I0216 17:05:01.425562 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs" event={"ID":"79f7753a-f961-4052-b281-ad09dc2b10a2","Type":"ContainerStarted","Data":"7186eca2cf47b02cc69c88198acedce6d8bf6457a701526c6d0fc3fced019d9f"}
Feb 16 17:05:01 crc kubenswrapper[4794]: I0216 17:05:01.425634 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs" event={"ID":"79f7753a-f961-4052-b281-ad09dc2b10a2","Type":"ContainerStarted","Data":"8b70c0387eb515fd7a8c2b94ac2cf47c92c3ba2d58ef46241d2b88b27e3e18b6"}
Feb 16 17:05:01 crc kubenswrapper[4794]: I0216 17:05:01.425721 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:01 crc kubenswrapper[4794]: I0216 17:05:01.448622 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs" podStartSLOduration=1.448599929 podStartE2EDuration="1.448599929s" podCreationTimestamp="2026-02-16 17:05:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:05:01.442478478 +0000 UTC m=+327.390573125" watchObservedRunningTime="2026-02-16 17:05:01.448599929 +0000 UTC m=+327.396694616"
Feb 16 17:05:20 crc kubenswrapper[4794]: I0216 17:05:20.141034 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 17:05:20 crc kubenswrapper[4794]: I0216 17:05:20.141629 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 17:05:20 crc kubenswrapper[4794]: I0216 17:05:20.652471 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-tpgxs"
Feb 16 17:05:20 crc kubenswrapper[4794]: I0216 17:05:20.733737 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-h6xgf"]
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.386537 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6v5np"]
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.387662 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6v5np" podUID="aa756591-c2f4-430e-8f17-bd040051f77d" containerName="registry-server" containerID="cri-o://872d1b9c96df1b502dd7971130ede6ef9e6714b71a7ffd21124860e6b42c7de5" gracePeriod=30
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.403502 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7cctn"]
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.408297 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7cctn" podUID="0f9ab6e7-980e-4a61-9072-cd2baa7c51ab" containerName="registry-server" containerID="cri-o://c42d498b85c6841f1e1c4f7ce19346e9e3f22d61cc77ccd19b5868e00ad59207" gracePeriod=30
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.410520 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-85b84"]
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.410738 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-85b84" podUID="9c029145-bf5d-4a8c-9419-fdcf93c96a4d" containerName="marketplace-operator" containerID="cri-o://45798dba8bdc71a58534bef3e846c8bfcaec57fd997841d29df53d367345ce9d" gracePeriod=30
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.431487 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5sk9z"]
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.431774 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5sk9z" podUID="3d9a576c-db95-4e07-9d36-c93e7adfbc46" containerName="registry-server" containerID="cri-o://101e2ace45e8f91bb0ec9f38f8a90442863142c22f3975f77a23c79e98e18732" gracePeriod=30
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.442673 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8hqkn"]
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.443788 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8hqkn"
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.461119 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7nzlb"]
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.461375 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7nzlb" podUID="1bfe3d12-bcac-4380-b906-7abe78d56232" containerName="registry-server" containerID="cri-o://b01e36befcd84ac0ca5e00992989458aca376661573ac71d358aa9145e63c6a8" gracePeriod=30
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.465592 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8hqkn"]
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.522157 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7dbed710-cd99-4571-8aca-92145b798f65-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8hqkn\" (UID: \"7dbed710-cd99-4571-8aca-92145b798f65\") " pod="openshift-marketplace/marketplace-operator-79b997595-8hqkn"
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.522214 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4792z\" (UniqueName: \"kubernetes.io/projected/7dbed710-cd99-4571-8aca-92145b798f65-kube-api-access-4792z\") pod \"marketplace-operator-79b997595-8hqkn\" (UID: \"7dbed710-cd99-4571-8aca-92145b798f65\") " pod="openshift-marketplace/marketplace-operator-79b997595-8hqkn"
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.522264 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7dbed710-cd99-4571-8aca-92145b798f65-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8hqkn\" (UID: \"7dbed710-cd99-4571-8aca-92145b798f65\") " pod="openshift-marketplace/marketplace-operator-79b997595-8hqkn"
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.623001 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4792z\" (UniqueName: \"kubernetes.io/projected/7dbed710-cd99-4571-8aca-92145b798f65-kube-api-access-4792z\") pod \"marketplace-operator-79b997595-8hqkn\" (UID: \"7dbed710-cd99-4571-8aca-92145b798f65\") " pod="openshift-marketplace/marketplace-operator-79b997595-8hqkn"
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.623078 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7dbed710-cd99-4571-8aca-92145b798f65-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8hqkn\" (UID: \"7dbed710-cd99-4571-8aca-92145b798f65\") " pod="openshift-marketplace/marketplace-operator-79b997595-8hqkn"
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.623109 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7dbed710-cd99-4571-8aca-92145b798f65-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8hqkn\" (UID: \"7dbed710-cd99-4571-8aca-92145b798f65\") " pod="openshift-marketplace/marketplace-operator-79b997595-8hqkn"
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.627018 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7dbed710-cd99-4571-8aca-92145b798f65-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8hqkn\" (UID: \"7dbed710-cd99-4571-8aca-92145b798f65\") " pod="openshift-marketplace/marketplace-operator-79b997595-8hqkn"
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.634385 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7dbed710-cd99-4571-8aca-92145b798f65-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8hqkn\" (UID: \"7dbed710-cd99-4571-8aca-92145b798f65\") " pod="openshift-marketplace/marketplace-operator-79b997595-8hqkn"
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.640784 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4792z\" (UniqueName: \"kubernetes.io/projected/7dbed710-cd99-4571-8aca-92145b798f65-kube-api-access-4792z\") pod \"marketplace-operator-79b997595-8hqkn\" (UID: \"7dbed710-cd99-4571-8aca-92145b798f65\") " pod="openshift-marketplace/marketplace-operator-79b997595-8hqkn"
Feb 16 17:05:41 crc kubenswrapper[4794]: E0216 17:05:41.693593 4794 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 101e2ace45e8f91bb0ec9f38f8a90442863142c22f3975f77a23c79e98e18732 is running failed: container process not found" containerID="101e2ace45e8f91bb0ec9f38f8a90442863142c22f3975f77a23c79e98e18732" cmd=["grpc_health_probe","-addr=:50051"]
Feb 16 17:05:41 crc kubenswrapper[4794]: E0216 17:05:41.693817 4794 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 101e2ace45e8f91bb0ec9f38f8a90442863142c22f3975f77a23c79e98e18732 is running failed: container process not found" containerID="101e2ace45e8f91bb0ec9f38f8a90442863142c22f3975f77a23c79e98e18732" cmd=["grpc_health_probe","-addr=:50051"]
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.693950 4794 generic.go:334] "Generic (PLEG): container finished" podID="0f9ab6e7-980e-4a61-9072-cd2baa7c51ab" containerID="c42d498b85c6841f1e1c4f7ce19346e9e3f22d61cc77ccd19b5868e00ad59207" exitCode=0
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.693996 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cctn" event={"ID":"0f9ab6e7-980e-4a61-9072-cd2baa7c51ab","Type":"ContainerDied","Data":"c42d498b85c6841f1e1c4f7ce19346e9e3f22d61cc77ccd19b5868e00ad59207"}
Feb 16 17:05:41 crc kubenswrapper[4794]: E0216 17:05:41.694046 4794 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 101e2ace45e8f91bb0ec9f38f8a90442863142c22f3975f77a23c79e98e18732 is running failed: container process not found" containerID="101e2ace45e8f91bb0ec9f38f8a90442863142c22f3975f77a23c79e98e18732" cmd=["grpc_health_probe","-addr=:50051"]
Feb 16 17:05:41 crc kubenswrapper[4794]: E0216 17:05:41.694065 4794 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 101e2ace45e8f91bb0ec9f38f8a90442863142c22f3975f77a23c79e98e18732 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-5sk9z" podUID="3d9a576c-db95-4e07-9d36-c93e7adfbc46" containerName="registry-server"
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.700493 4794 generic.go:334] "Generic (PLEG): container finished" podID="3d9a576c-db95-4e07-9d36-c93e7adfbc46" containerID="101e2ace45e8f91bb0ec9f38f8a90442863142c22f3975f77a23c79e98e18732" exitCode=0
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.700566 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5sk9z" event={"ID":"3d9a576c-db95-4e07-9d36-c93e7adfbc46","Type":"ContainerDied","Data":"101e2ace45e8f91bb0ec9f38f8a90442863142c22f3975f77a23c79e98e18732"}
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.709845 4794 generic.go:334] "Generic (PLEG): container finished" podID="9c029145-bf5d-4a8c-9419-fdcf93c96a4d" containerID="45798dba8bdc71a58534bef3e846c8bfcaec57fd997841d29df53d367345ce9d" exitCode=0
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.709917 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-85b84" event={"ID":"9c029145-bf5d-4a8c-9419-fdcf93c96a4d","Type":"ContainerDied","Data":"45798dba8bdc71a58534bef3e846c8bfcaec57fd997841d29df53d367345ce9d"}
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.709950 4794 scope.go:117] "RemoveContainer" containerID="f9cf8e3246408184e6b3aa25436ea6945ac6e95059e56bb5f8c5bec5791fe540"
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.713442 4794 generic.go:334] "Generic (PLEG): container finished" podID="aa756591-c2f4-430e-8f17-bd040051f77d" containerID="872d1b9c96df1b502dd7971130ede6ef9e6714b71a7ffd21124860e6b42c7de5" exitCode=0
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.713485 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6v5np" event={"ID":"aa756591-c2f4-430e-8f17-bd040051f77d","Type":"ContainerDied","Data":"872d1b9c96df1b502dd7971130ede6ef9e6714b71a7ffd21124860e6b42c7de5"}
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.715501 4794 generic.go:334] "Generic (PLEG): container finished" podID="1bfe3d12-bcac-4380-b906-7abe78d56232" containerID="b01e36befcd84ac0ca5e00992989458aca376661573ac71d358aa9145e63c6a8" exitCode=0
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.715526 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nzlb" event={"ID":"1bfe3d12-bcac-4380-b906-7abe78d56232","Type":"ContainerDied","Data":"b01e36befcd84ac0ca5e00992989458aca376661573ac71d358aa9145e63c6a8"}
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.836429 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8hqkn"
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.852752 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6v5np"
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.857509 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7cctn"
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.861189 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7nzlb"
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.877562 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-85b84"
Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.897152 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5sk9z" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.936752 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8t2r\" (UniqueName: \"kubernetes.io/projected/1bfe3d12-bcac-4380-b906-7abe78d56232-kube-api-access-w8t2r\") pod \"1bfe3d12-bcac-4380-b906-7abe78d56232\" (UID: \"1bfe3d12-bcac-4380-b906-7abe78d56232\") " Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.936819 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n8rd\" (UniqueName: \"kubernetes.io/projected/aa756591-c2f4-430e-8f17-bd040051f77d-kube-api-access-7n8rd\") pod \"aa756591-c2f4-430e-8f17-bd040051f77d\" (UID: \"aa756591-c2f4-430e-8f17-bd040051f77d\") " Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.936855 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9c029145-bf5d-4a8c-9419-fdcf93c96a4d-marketplace-operator-metrics\") pod \"9c029145-bf5d-4a8c-9419-fdcf93c96a4d\" (UID: \"9c029145-bf5d-4a8c-9419-fdcf93c96a4d\") " Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.936923 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9a576c-db95-4e07-9d36-c93e7adfbc46-utilities\") pod \"3d9a576c-db95-4e07-9d36-c93e7adfbc46\" (UID: \"3d9a576c-db95-4e07-9d36-c93e7adfbc46\") " Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.936975 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa756591-c2f4-430e-8f17-bd040051f77d-utilities\") pod \"aa756591-c2f4-430e-8f17-bd040051f77d\" (UID: \"aa756591-c2f4-430e-8f17-bd040051f77d\") " Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.937017 4794 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c029145-bf5d-4a8c-9419-fdcf93c96a4d-marketplace-trusted-ca\") pod \"9c029145-bf5d-4a8c-9419-fdcf93c96a4d\" (UID: \"9c029145-bf5d-4a8c-9419-fdcf93c96a4d\") " Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.937049 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9ab6e7-980e-4a61-9072-cd2baa7c51ab-catalog-content\") pod \"0f9ab6e7-980e-4a61-9072-cd2baa7c51ab\" (UID: \"0f9ab6e7-980e-4a61-9072-cd2baa7c51ab\") " Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.937076 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9mgj\" (UniqueName: \"kubernetes.io/projected/3d9a576c-db95-4e07-9d36-c93e7adfbc46-kube-api-access-x9mgj\") pod \"3d9a576c-db95-4e07-9d36-c93e7adfbc46\" (UID: \"3d9a576c-db95-4e07-9d36-c93e7adfbc46\") " Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.937115 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq6f6\" (UniqueName: \"kubernetes.io/projected/0f9ab6e7-980e-4a61-9072-cd2baa7c51ab-kube-api-access-lq6f6\") pod \"0f9ab6e7-980e-4a61-9072-cd2baa7c51ab\" (UID: \"0f9ab6e7-980e-4a61-9072-cd2baa7c51ab\") " Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.937144 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa756591-c2f4-430e-8f17-bd040051f77d-catalog-content\") pod \"aa756591-c2f4-430e-8f17-bd040051f77d\" (UID: \"aa756591-c2f4-430e-8f17-bd040051f77d\") " Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.937216 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jbgr\" (UniqueName: \"kubernetes.io/projected/9c029145-bf5d-4a8c-9419-fdcf93c96a4d-kube-api-access-9jbgr\") 
pod \"9c029145-bf5d-4a8c-9419-fdcf93c96a4d\" (UID: \"9c029145-bf5d-4a8c-9419-fdcf93c96a4d\") " Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.937244 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9ab6e7-980e-4a61-9072-cd2baa7c51ab-utilities\") pod \"0f9ab6e7-980e-4a61-9072-cd2baa7c51ab\" (UID: \"0f9ab6e7-980e-4a61-9072-cd2baa7c51ab\") " Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.937293 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bfe3d12-bcac-4380-b906-7abe78d56232-catalog-content\") pod \"1bfe3d12-bcac-4380-b906-7abe78d56232\" (UID: \"1bfe3d12-bcac-4380-b906-7abe78d56232\") " Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.941510 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bfe3d12-bcac-4380-b906-7abe78d56232-utilities\") pod \"1bfe3d12-bcac-4380-b906-7abe78d56232\" (UID: \"1bfe3d12-bcac-4380-b906-7abe78d56232\") " Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.938255 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c029145-bf5d-4a8c-9419-fdcf93c96a4d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "9c029145-bf5d-4a8c-9419-fdcf93c96a4d" (UID: "9c029145-bf5d-4a8c-9419-fdcf93c96a4d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.944156 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa756591-c2f4-430e-8f17-bd040051f77d-utilities" (OuterVolumeSpecName: "utilities") pod "aa756591-c2f4-430e-8f17-bd040051f77d" (UID: "aa756591-c2f4-430e-8f17-bd040051f77d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.944744 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f9ab6e7-980e-4a61-9072-cd2baa7c51ab-utilities" (OuterVolumeSpecName: "utilities") pod "0f9ab6e7-980e-4a61-9072-cd2baa7c51ab" (UID: "0f9ab6e7-980e-4a61-9072-cd2baa7c51ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.945064 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9a576c-db95-4e07-9d36-c93e7adfbc46-utilities" (OuterVolumeSpecName: "utilities") pod "3d9a576c-db95-4e07-9d36-c93e7adfbc46" (UID: "3d9a576c-db95-4e07-9d36-c93e7adfbc46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.946165 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bfe3d12-bcac-4380-b906-7abe78d56232-utilities" (OuterVolumeSpecName: "utilities") pod "1bfe3d12-bcac-4380-b906-7abe78d56232" (UID: "1bfe3d12-bcac-4380-b906-7abe78d56232"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.946809 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bfe3d12-bcac-4380-b906-7abe78d56232-kube-api-access-w8t2r" (OuterVolumeSpecName: "kube-api-access-w8t2r") pod "1bfe3d12-bcac-4380-b906-7abe78d56232" (UID: "1bfe3d12-bcac-4380-b906-7abe78d56232"). InnerVolumeSpecName "kube-api-access-w8t2r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.951338 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9a576c-db95-4e07-9d36-c93e7adfbc46-kube-api-access-x9mgj" (OuterVolumeSpecName: "kube-api-access-x9mgj") pod "3d9a576c-db95-4e07-9d36-c93e7adfbc46" (UID: "3d9a576c-db95-4e07-9d36-c93e7adfbc46"). InnerVolumeSpecName "kube-api-access-x9mgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.958252 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c029145-bf5d-4a8c-9419-fdcf93c96a4d-kube-api-access-9jbgr" (OuterVolumeSpecName: "kube-api-access-9jbgr") pod "9c029145-bf5d-4a8c-9419-fdcf93c96a4d" (UID: "9c029145-bf5d-4a8c-9419-fdcf93c96a4d"). InnerVolumeSpecName "kube-api-access-9jbgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.958457 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa756591-c2f4-430e-8f17-bd040051f77d-kube-api-access-7n8rd" (OuterVolumeSpecName: "kube-api-access-7n8rd") pod "aa756591-c2f4-430e-8f17-bd040051f77d" (UID: "aa756591-c2f4-430e-8f17-bd040051f77d"). InnerVolumeSpecName "kube-api-access-7n8rd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.959002 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f9ab6e7-980e-4a61-9072-cd2baa7c51ab-kube-api-access-lq6f6" (OuterVolumeSpecName: "kube-api-access-lq6f6") pod "0f9ab6e7-980e-4a61-9072-cd2baa7c51ab" (UID: "0f9ab6e7-980e-4a61-9072-cd2baa7c51ab"). InnerVolumeSpecName "kube-api-access-lq6f6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.960099 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c029145-bf5d-4a8c-9419-fdcf93c96a4d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "9c029145-bf5d-4a8c-9419-fdcf93c96a4d" (UID: "9c029145-bf5d-4a8c-9419-fdcf93c96a4d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.969654 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9a576c-db95-4e07-9d36-c93e7adfbc46-catalog-content\") pod \"3d9a576c-db95-4e07-9d36-c93e7adfbc46\" (UID: \"3d9a576c-db95-4e07-9d36-c93e7adfbc46\") " Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.970333 4794 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9c029145-bf5d-4a8c-9419-fdcf93c96a4d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.970354 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9a576c-db95-4e07-9d36-c93e7adfbc46-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.970367 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa756591-c2f4-430e-8f17-bd040051f77d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.970379 4794 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c029145-bf5d-4a8c-9419-fdcf93c96a4d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:41 
crc kubenswrapper[4794]: I0216 17:05:41.970390 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9mgj\" (UniqueName: \"kubernetes.io/projected/3d9a576c-db95-4e07-9d36-c93e7adfbc46-kube-api-access-x9mgj\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.970403 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq6f6\" (UniqueName: \"kubernetes.io/projected/0f9ab6e7-980e-4a61-9072-cd2baa7c51ab-kube-api-access-lq6f6\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.970415 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9ab6e7-980e-4a61-9072-cd2baa7c51ab-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.970571 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jbgr\" (UniqueName: \"kubernetes.io/projected/9c029145-bf5d-4a8c-9419-fdcf93c96a4d-kube-api-access-9jbgr\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.970943 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bfe3d12-bcac-4380-b906-7abe78d56232-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.970961 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8t2r\" (UniqueName: \"kubernetes.io/projected/1bfe3d12-bcac-4380-b906-7abe78d56232-kube-api-access-w8t2r\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.970971 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n8rd\" (UniqueName: \"kubernetes.io/projected/aa756591-c2f4-430e-8f17-bd040051f77d-kube-api-access-7n8rd\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:41 crc kubenswrapper[4794]: I0216 17:05:41.995089 4794 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9a576c-db95-4e07-9d36-c93e7adfbc46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d9a576c-db95-4e07-9d36-c93e7adfbc46" (UID: "3d9a576c-db95-4e07-9d36-c93e7adfbc46"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.027201 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f9ab6e7-980e-4a61-9072-cd2baa7c51ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f9ab6e7-980e-4a61-9072-cd2baa7c51ab" (UID: "0f9ab6e7-980e-4a61-9072-cd2baa7c51ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.037507 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa756591-c2f4-430e-8f17-bd040051f77d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa756591-c2f4-430e-8f17-bd040051f77d" (UID: "aa756591-c2f4-430e-8f17-bd040051f77d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.073085 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9ab6e7-980e-4a61-9072-cd2baa7c51ab-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.073116 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa756591-c2f4-430e-8f17-bd040051f77d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.073127 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9a576c-db95-4e07-9d36-c93e7adfbc46-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.101440 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8hqkn"] Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.124253 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bfe3d12-bcac-4380-b906-7abe78d56232-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1bfe3d12-bcac-4380-b906-7abe78d56232" (UID: "1bfe3d12-bcac-4380-b906-7abe78d56232"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.173697 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bfe3d12-bcac-4380-b906-7abe78d56232-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.723053 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cctn" event={"ID":"0f9ab6e7-980e-4a61-9072-cd2baa7c51ab","Type":"ContainerDied","Data":"2d17d3d6a236065cd7e811f43742b7a5f2dae8d121ac92522410e06373e9ed16"} Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.723384 4794 scope.go:117] "RemoveContainer" containerID="c42d498b85c6841f1e1c4f7ce19346e9e3f22d61cc77ccd19b5868e00ad59207" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.723074 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7cctn" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.725100 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5sk9z" event={"ID":"3d9a576c-db95-4e07-9d36-c93e7adfbc46","Type":"ContainerDied","Data":"9940f5d7c5b0dda7893f45c0ed536276c8eeabc4543591fa673ebe10815dd2a9"} Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.725129 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5sk9z" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.726192 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-85b84" event={"ID":"9c029145-bf5d-4a8c-9419-fdcf93c96a4d","Type":"ContainerDied","Data":"0c85789584138b14fa1f1c4029ec1f6fff79042b1fdd1262df8c4445cb5ae128"} Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.726253 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-85b84" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.728591 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6v5np" event={"ID":"aa756591-c2f4-430e-8f17-bd040051f77d","Type":"ContainerDied","Data":"a7a07ef59f3883f8372eff4d9509e673c252c5232022c8b539a224431faa3010"} Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.728644 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6v5np" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.732114 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nzlb" event={"ID":"1bfe3d12-bcac-4380-b906-7abe78d56232","Type":"ContainerDied","Data":"6efa4dc49181c7cf0212d254d20676f72be8f2186405d309e0572457abafc23c"} Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.732150 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7nzlb" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.734069 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8hqkn" event={"ID":"7dbed710-cd99-4571-8aca-92145b798f65","Type":"ContainerStarted","Data":"95305e36fa4f2a6c5a0765efe1331ac813cfacb7aa275d8be5fe66192924e00b"} Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.734121 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8hqkn" event={"ID":"7dbed710-cd99-4571-8aca-92145b798f65","Type":"ContainerStarted","Data":"c426563b93894fb3b5403a70e62ad8f3a1227fcf43a18658307919ab0a166fcf"} Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.739785 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8hqkn" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.747253 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8hqkn" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.750266 4794 scope.go:117] "RemoveContainer" containerID="495ae736baebd4c74d9e49656cdb4b3cc30f38d7199efe52d7009055906d49de" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.771243 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8hqkn" podStartSLOduration=1.771221986 podStartE2EDuration="1.771221986s" podCreationTimestamp="2026-02-16 17:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:05:42.767161046 +0000 UTC m=+368.715255703" watchObservedRunningTime="2026-02-16 17:05:42.771221986 +0000 UTC m=+368.719316663" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.773753 4794 scope.go:117] 
"RemoveContainer" containerID="71bdf376a8635ab531c452263b9c2823c51d4966b981aa47b197296044fdd364" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.810884 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5sk9z"] Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.822317 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5sk9z"] Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.827633 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7cctn"] Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.830424 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7cctn"] Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.840068 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7nzlb"] Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.846173 4794 scope.go:117] "RemoveContainer" containerID="101e2ace45e8f91bb0ec9f38f8a90442863142c22f3975f77a23c79e98e18732" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.864196 4794 scope.go:117] "RemoveContainer" containerID="c5b8ee0b6432c5bc56708a1b1812d6a38bbd76d7dff5c48d0b77a8f2c85fbb38" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.867438 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7nzlb"] Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.873963 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6v5np"] Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.880495 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6v5np"] Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.883555 4794 scope.go:117] "RemoveContainer" 
containerID="2d75b48ef75f4e53d1c41c694ceb6430dc93619c33bb03bc02136286684c8a61" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.885329 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-85b84"] Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.889357 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-85b84"] Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.901045 4794 scope.go:117] "RemoveContainer" containerID="45798dba8bdc71a58534bef3e846c8bfcaec57fd997841d29df53d367345ce9d" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.912456 4794 scope.go:117] "RemoveContainer" containerID="872d1b9c96df1b502dd7971130ede6ef9e6714b71a7ffd21124860e6b42c7de5" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.923580 4794 scope.go:117] "RemoveContainer" containerID="8bdb027dee1055b133f8785550e922a775ef974fd3cab4d1bb112e3a933160f7" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.936178 4794 scope.go:117] "RemoveContainer" containerID="8be47568071a475c4b7ba4c8c9f0978791a2a0e64a8f50e98b9aeb572de37aa6" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.948811 4794 scope.go:117] "RemoveContainer" containerID="b01e36befcd84ac0ca5e00992989458aca376661573ac71d358aa9145e63c6a8" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.966994 4794 scope.go:117] "RemoveContainer" containerID="c584e24afd6b22886dc219def6085e7103673de36483b3ebd2d33856c94b59ae" Feb 16 17:05:42 crc kubenswrapper[4794]: I0216 17:05:42.995268 4794 scope.go:117] "RemoveContainer" containerID="385cc116e4325ac949f928abfe7837ffb98d24c0e02c0cda253ac2e2c30ff8bc" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.602102 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qrdnt"] Feb 16 17:05:43 crc kubenswrapper[4794]: E0216 17:05:43.602296 4794 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="aa756591-c2f4-430e-8f17-bd040051f77d" containerName="extract-content" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.602321 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa756591-c2f4-430e-8f17-bd040051f77d" containerName="extract-content" Feb 16 17:05:43 crc kubenswrapper[4794]: E0216 17:05:43.602335 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa756591-c2f4-430e-8f17-bd040051f77d" containerName="registry-server" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.602340 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa756591-c2f4-430e-8f17-bd040051f77d" containerName="registry-server" Feb 16 17:05:43 crc kubenswrapper[4794]: E0216 17:05:43.602351 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f9ab6e7-980e-4a61-9072-cd2baa7c51ab" containerName="extract-utilities" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.602357 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f9ab6e7-980e-4a61-9072-cd2baa7c51ab" containerName="extract-utilities" Feb 16 17:05:43 crc kubenswrapper[4794]: E0216 17:05:43.602365 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa756591-c2f4-430e-8f17-bd040051f77d" containerName="extract-utilities" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.602372 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa756591-c2f4-430e-8f17-bd040051f77d" containerName="extract-utilities" Feb 16 17:05:43 crc kubenswrapper[4794]: E0216 17:05:43.602380 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9a576c-db95-4e07-9d36-c93e7adfbc46" containerName="extract-content" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.602386 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9a576c-db95-4e07-9d36-c93e7adfbc46" containerName="extract-content" Feb 16 17:05:43 crc kubenswrapper[4794]: E0216 17:05:43.602392 4794 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9c029145-bf5d-4a8c-9419-fdcf93c96a4d" containerName="marketplace-operator" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.602398 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c029145-bf5d-4a8c-9419-fdcf93c96a4d" containerName="marketplace-operator" Feb 16 17:05:43 crc kubenswrapper[4794]: E0216 17:05:43.602408 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9a576c-db95-4e07-9d36-c93e7adfbc46" containerName="extract-utilities" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.602415 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9a576c-db95-4e07-9d36-c93e7adfbc46" containerName="extract-utilities" Feb 16 17:05:43 crc kubenswrapper[4794]: E0216 17:05:43.602427 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c029145-bf5d-4a8c-9419-fdcf93c96a4d" containerName="marketplace-operator" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.602434 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c029145-bf5d-4a8c-9419-fdcf93c96a4d" containerName="marketplace-operator" Feb 16 17:05:43 crc kubenswrapper[4794]: E0216 17:05:43.602442 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bfe3d12-bcac-4380-b906-7abe78d56232" containerName="extract-utilities" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.602450 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bfe3d12-bcac-4380-b906-7abe78d56232" containerName="extract-utilities" Feb 16 17:05:43 crc kubenswrapper[4794]: E0216 17:05:43.602459 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9a576c-db95-4e07-9d36-c93e7adfbc46" containerName="registry-server" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.602466 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9a576c-db95-4e07-9d36-c93e7adfbc46" containerName="registry-server" Feb 16 17:05:43 crc kubenswrapper[4794]: E0216 17:05:43.602475 4794 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="1bfe3d12-bcac-4380-b906-7abe78d56232" containerName="registry-server" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.602482 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bfe3d12-bcac-4380-b906-7abe78d56232" containerName="registry-server" Feb 16 17:05:43 crc kubenswrapper[4794]: E0216 17:05:43.602493 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f9ab6e7-980e-4a61-9072-cd2baa7c51ab" containerName="extract-content" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.602498 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f9ab6e7-980e-4a61-9072-cd2baa7c51ab" containerName="extract-content" Feb 16 17:05:43 crc kubenswrapper[4794]: E0216 17:05:43.602504 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f9ab6e7-980e-4a61-9072-cd2baa7c51ab" containerName="registry-server" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.602510 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f9ab6e7-980e-4a61-9072-cd2baa7c51ab" containerName="registry-server" Feb 16 17:05:43 crc kubenswrapper[4794]: E0216 17:05:43.602520 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bfe3d12-bcac-4380-b906-7abe78d56232" containerName="extract-content" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.602526 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bfe3d12-bcac-4380-b906-7abe78d56232" containerName="extract-content" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.602661 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d9a576c-db95-4e07-9d36-c93e7adfbc46" containerName="registry-server" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.602677 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa756591-c2f4-430e-8f17-bd040051f77d" containerName="registry-server" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.602688 4794 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="9c029145-bf5d-4a8c-9419-fdcf93c96a4d" containerName="marketplace-operator" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.602696 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f9ab6e7-980e-4a61-9072-cd2baa7c51ab" containerName="registry-server" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.602704 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bfe3d12-bcac-4380-b906-7abe78d56232" containerName="registry-server" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.602715 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c029145-bf5d-4a8c-9419-fdcf93c96a4d" containerName="marketplace-operator" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.603394 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qrdnt" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.605696 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.616678 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qrdnt"] Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.695408 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cf8044a-dc2d-47f7-9edb-166f30ac8ab2-utilities\") pod \"certified-operators-qrdnt\" (UID: \"9cf8044a-dc2d-47f7-9edb-166f30ac8ab2\") " pod="openshift-marketplace/certified-operators-qrdnt" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.695647 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwltm\" (UniqueName: \"kubernetes.io/projected/9cf8044a-dc2d-47f7-9edb-166f30ac8ab2-kube-api-access-cwltm\") pod \"certified-operators-qrdnt\" 
(UID: \"9cf8044a-dc2d-47f7-9edb-166f30ac8ab2\") " pod="openshift-marketplace/certified-operators-qrdnt" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.695718 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cf8044a-dc2d-47f7-9edb-166f30ac8ab2-catalog-content\") pod \"certified-operators-qrdnt\" (UID: \"9cf8044a-dc2d-47f7-9edb-166f30ac8ab2\") " pod="openshift-marketplace/certified-operators-qrdnt" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.797200 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cf8044a-dc2d-47f7-9edb-166f30ac8ab2-catalog-content\") pod \"certified-operators-qrdnt\" (UID: \"9cf8044a-dc2d-47f7-9edb-166f30ac8ab2\") " pod="openshift-marketplace/certified-operators-qrdnt" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.797292 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cf8044a-dc2d-47f7-9edb-166f30ac8ab2-utilities\") pod \"certified-operators-qrdnt\" (UID: \"9cf8044a-dc2d-47f7-9edb-166f30ac8ab2\") " pod="openshift-marketplace/certified-operators-qrdnt" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.797391 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwltm\" (UniqueName: \"kubernetes.io/projected/9cf8044a-dc2d-47f7-9edb-166f30ac8ab2-kube-api-access-cwltm\") pod \"certified-operators-qrdnt\" (UID: \"9cf8044a-dc2d-47f7-9edb-166f30ac8ab2\") " pod="openshift-marketplace/certified-operators-qrdnt" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.798337 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cf8044a-dc2d-47f7-9edb-166f30ac8ab2-catalog-content\") pod \"certified-operators-qrdnt\" 
(UID: \"9cf8044a-dc2d-47f7-9edb-166f30ac8ab2\") " pod="openshift-marketplace/certified-operators-qrdnt" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.798392 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cf8044a-dc2d-47f7-9edb-166f30ac8ab2-utilities\") pod \"certified-operators-qrdnt\" (UID: \"9cf8044a-dc2d-47f7-9edb-166f30ac8ab2\") " pod="openshift-marketplace/certified-operators-qrdnt" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.807466 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dwwrn"] Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.811557 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dwwrn" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.813546 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.817717 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dwwrn"] Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.825743 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwltm\" (UniqueName: \"kubernetes.io/projected/9cf8044a-dc2d-47f7-9edb-166f30ac8ab2-kube-api-access-cwltm\") pod \"certified-operators-qrdnt\" (UID: \"9cf8044a-dc2d-47f7-9edb-166f30ac8ab2\") " pod="openshift-marketplace/certified-operators-qrdnt" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.899071 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlrqb\" (UniqueName: \"kubernetes.io/projected/d7f1aaab-f576-46e7-8dde-d4cf89e2ff10-kube-api-access-tlrqb\") pod \"community-operators-dwwrn\" (UID: \"d7f1aaab-f576-46e7-8dde-d4cf89e2ff10\") " 
pod="openshift-marketplace/community-operators-dwwrn" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.899137 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f1aaab-f576-46e7-8dde-d4cf89e2ff10-catalog-content\") pod \"community-operators-dwwrn\" (UID: \"d7f1aaab-f576-46e7-8dde-d4cf89e2ff10\") " pod="openshift-marketplace/community-operators-dwwrn" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.899628 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f1aaab-f576-46e7-8dde-d4cf89e2ff10-utilities\") pod \"community-operators-dwwrn\" (UID: \"d7f1aaab-f576-46e7-8dde-d4cf89e2ff10\") " pod="openshift-marketplace/community-operators-dwwrn" Feb 16 17:05:43 crc kubenswrapper[4794]: I0216 17:05:43.932116 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qrdnt" Feb 16 17:05:44 crc kubenswrapper[4794]: I0216 17:05:44.000950 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f1aaab-f576-46e7-8dde-d4cf89e2ff10-utilities\") pod \"community-operators-dwwrn\" (UID: \"d7f1aaab-f576-46e7-8dde-d4cf89e2ff10\") " pod="openshift-marketplace/community-operators-dwwrn" Feb 16 17:05:44 crc kubenswrapper[4794]: I0216 17:05:44.001048 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlrqb\" (UniqueName: \"kubernetes.io/projected/d7f1aaab-f576-46e7-8dde-d4cf89e2ff10-kube-api-access-tlrqb\") pod \"community-operators-dwwrn\" (UID: \"d7f1aaab-f576-46e7-8dde-d4cf89e2ff10\") " pod="openshift-marketplace/community-operators-dwwrn" Feb 16 17:05:44 crc kubenswrapper[4794]: I0216 17:05:44.001088 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f1aaab-f576-46e7-8dde-d4cf89e2ff10-catalog-content\") pod \"community-operators-dwwrn\" (UID: \"d7f1aaab-f576-46e7-8dde-d4cf89e2ff10\") " pod="openshift-marketplace/community-operators-dwwrn" Feb 16 17:05:44 crc kubenswrapper[4794]: I0216 17:05:44.001795 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f1aaab-f576-46e7-8dde-d4cf89e2ff10-catalog-content\") pod \"community-operators-dwwrn\" (UID: \"d7f1aaab-f576-46e7-8dde-d4cf89e2ff10\") " pod="openshift-marketplace/community-operators-dwwrn" Feb 16 17:05:44 crc kubenswrapper[4794]: I0216 17:05:44.002113 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f1aaab-f576-46e7-8dde-d4cf89e2ff10-utilities\") pod \"community-operators-dwwrn\" (UID: \"d7f1aaab-f576-46e7-8dde-d4cf89e2ff10\") " pod="openshift-marketplace/community-operators-dwwrn" Feb 16 17:05:44 crc kubenswrapper[4794]: I0216 17:05:44.022545 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlrqb\" (UniqueName: \"kubernetes.io/projected/d7f1aaab-f576-46e7-8dde-d4cf89e2ff10-kube-api-access-tlrqb\") pod \"community-operators-dwwrn\" (UID: \"d7f1aaab-f576-46e7-8dde-d4cf89e2ff10\") " pod="openshift-marketplace/community-operators-dwwrn" Feb 16 17:05:44 crc kubenswrapper[4794]: I0216 17:05:44.136188 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qrdnt"] Feb 16 17:05:44 crc kubenswrapper[4794]: I0216 17:05:44.153011 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dwwrn" Feb 16 17:05:44 crc kubenswrapper[4794]: I0216 17:05:44.322741 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dwwrn"] Feb 16 17:05:44 crc kubenswrapper[4794]: W0216 17:05:44.332912 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7f1aaab_f576_46e7_8dde_d4cf89e2ff10.slice/crio-d67fcf38f42548e3104f350630dbd8176775d4818acb806a72966fc388414957 WatchSource:0}: Error finding container d67fcf38f42548e3104f350630dbd8176775d4818acb806a72966fc388414957: Status 404 returned error can't find the container with id d67fcf38f42548e3104f350630dbd8176775d4818acb806a72966fc388414957 Feb 16 17:05:44 crc kubenswrapper[4794]: I0216 17:05:44.751475 4794 generic.go:334] "Generic (PLEG): container finished" podID="d7f1aaab-f576-46e7-8dde-d4cf89e2ff10" containerID="251df175e58f63312d4ba404cc9acdb9b104211360d71987a7039a9e36ad839a" exitCode=0 Feb 16 17:05:44 crc kubenswrapper[4794]: I0216 17:05:44.751570 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwwrn" event={"ID":"d7f1aaab-f576-46e7-8dde-d4cf89e2ff10","Type":"ContainerDied","Data":"251df175e58f63312d4ba404cc9acdb9b104211360d71987a7039a9e36ad839a"} Feb 16 17:05:44 crc kubenswrapper[4794]: I0216 17:05:44.751617 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwwrn" event={"ID":"d7f1aaab-f576-46e7-8dde-d4cf89e2ff10","Type":"ContainerStarted","Data":"d67fcf38f42548e3104f350630dbd8176775d4818acb806a72966fc388414957"} Feb 16 17:05:44 crc kubenswrapper[4794]: I0216 17:05:44.752848 4794 generic.go:334] "Generic (PLEG): container finished" podID="9cf8044a-dc2d-47f7-9edb-166f30ac8ab2" containerID="901c4b8f6ddad898a22d3847ce4f308c6f37ceab4aff8de9d30fa16de856d012" exitCode=0 Feb 16 17:05:44 crc kubenswrapper[4794]: I0216 
17:05:44.753338 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrdnt" event={"ID":"9cf8044a-dc2d-47f7-9edb-166f30ac8ab2","Type":"ContainerDied","Data":"901c4b8f6ddad898a22d3847ce4f308c6f37ceab4aff8de9d30fa16de856d012"} Feb 16 17:05:44 crc kubenswrapper[4794]: I0216 17:05:44.753359 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrdnt" event={"ID":"9cf8044a-dc2d-47f7-9edb-166f30ac8ab2","Type":"ContainerStarted","Data":"7e0d3fbb6573eb76f5a71dd29810ee407785da0f608f860054282e1e6e16be48"} Feb 16 17:05:44 crc kubenswrapper[4794]: I0216 17:05:44.798863 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f9ab6e7-980e-4a61-9072-cd2baa7c51ab" path="/var/lib/kubelet/pods/0f9ab6e7-980e-4a61-9072-cd2baa7c51ab/volumes" Feb 16 17:05:44 crc kubenswrapper[4794]: I0216 17:05:44.799517 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bfe3d12-bcac-4380-b906-7abe78d56232" path="/var/lib/kubelet/pods/1bfe3d12-bcac-4380-b906-7abe78d56232/volumes" Feb 16 17:05:44 crc kubenswrapper[4794]: I0216 17:05:44.800069 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9a576c-db95-4e07-9d36-c93e7adfbc46" path="/var/lib/kubelet/pods/3d9a576c-db95-4e07-9d36-c93e7adfbc46/volumes" Feb 16 17:05:44 crc kubenswrapper[4794]: I0216 17:05:44.801152 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c029145-bf5d-4a8c-9419-fdcf93c96a4d" path="/var/lib/kubelet/pods/9c029145-bf5d-4a8c-9419-fdcf93c96a4d/volumes" Feb 16 17:05:44 crc kubenswrapper[4794]: I0216 17:05:44.801589 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa756591-c2f4-430e-8f17-bd040051f77d" path="/var/lib/kubelet/pods/aa756591-c2f4-430e-8f17-bd040051f77d/volumes" Feb 16 17:05:45 crc kubenswrapper[4794]: I0216 17:05:45.759949 4794 generic.go:334] "Generic (PLEG): container finished" 
podID="d7f1aaab-f576-46e7-8dde-d4cf89e2ff10" containerID="ae8719d0b4ed1ae27deb79bb76ad515bc32746f81ce1e16f0ec62a38bc744fdb" exitCode=0 Feb 16 17:05:45 crc kubenswrapper[4794]: I0216 17:05:45.760154 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwwrn" event={"ID":"d7f1aaab-f576-46e7-8dde-d4cf89e2ff10","Type":"ContainerDied","Data":"ae8719d0b4ed1ae27deb79bb76ad515bc32746f81ce1e16f0ec62a38bc744fdb"} Feb 16 17:05:45 crc kubenswrapper[4794]: I0216 17:05:45.762134 4794 generic.go:334] "Generic (PLEG): container finished" podID="9cf8044a-dc2d-47f7-9edb-166f30ac8ab2" containerID="e8db6b17c88c8787b4d7492a2bfe162c4b65610ff33c9f119effb10dc0d72d45" exitCode=0 Feb 16 17:05:45 crc kubenswrapper[4794]: I0216 17:05:45.762163 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrdnt" event={"ID":"9cf8044a-dc2d-47f7-9edb-166f30ac8ab2","Type":"ContainerDied","Data":"e8db6b17c88c8787b4d7492a2bfe162c4b65610ff33c9f119effb10dc0d72d45"} Feb 16 17:05:45 crc kubenswrapper[4794]: I0216 17:05:45.772394 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" podUID="789593ed-6d75-46b7-9c80-641a7b76a749" containerName="registry" containerID="cri-o://91c2ff5c3f37f22bdd653102c9b771591f66a337535d5dc54f1705a24aab60c2" gracePeriod=30 Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.003063 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q8h66"] Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.006568 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8h66" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.011869 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8h66"] Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.015502 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.132105 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/984a7bce-d46b-4339-bd79-7fce25092b99-utilities\") pod \"redhat-marketplace-q8h66\" (UID: \"984a7bce-d46b-4339-bd79-7fce25092b99\") " pod="openshift-marketplace/redhat-marketplace-q8h66" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.132192 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/984a7bce-d46b-4339-bd79-7fce25092b99-catalog-content\") pod \"redhat-marketplace-q8h66\" (UID: \"984a7bce-d46b-4339-bd79-7fce25092b99\") " pod="openshift-marketplace/redhat-marketplace-q8h66" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.132272 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzmdg\" (UniqueName: \"kubernetes.io/projected/984a7bce-d46b-4339-bd79-7fce25092b99-kube-api-access-hzmdg\") pod \"redhat-marketplace-q8h66\" (UID: \"984a7bce-d46b-4339-bd79-7fce25092b99\") " pod="openshift-marketplace/redhat-marketplace-q8h66" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.140280 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.197824 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zw7gt"] Feb 16 17:05:46 crc kubenswrapper[4794]: E0216 17:05:46.198032 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="789593ed-6d75-46b7-9c80-641a7b76a749" containerName="registry" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.198046 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="789593ed-6d75-46b7-9c80-641a7b76a749" containerName="registry" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.198136 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="789593ed-6d75-46b7-9c80-641a7b76a749" containerName="registry" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.198805 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zw7gt" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.201757 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.205278 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zw7gt"] Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.233570 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/789593ed-6d75-46b7-9c80-641a7b76a749-trusted-ca\") pod \"789593ed-6d75-46b7-9c80-641a7b76a749\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.233651 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shpd2\" (UniqueName: 
\"kubernetes.io/projected/789593ed-6d75-46b7-9c80-641a7b76a749-kube-api-access-shpd2\") pod \"789593ed-6d75-46b7-9c80-641a7b76a749\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.233728 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/789593ed-6d75-46b7-9c80-641a7b76a749-installation-pull-secrets\") pod \"789593ed-6d75-46b7-9c80-641a7b76a749\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.233774 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/789593ed-6d75-46b7-9c80-641a7b76a749-registry-certificates\") pod \"789593ed-6d75-46b7-9c80-641a7b76a749\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.233920 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"789593ed-6d75-46b7-9c80-641a7b76a749\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.233954 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/789593ed-6d75-46b7-9c80-641a7b76a749-ca-trust-extracted\") pod \"789593ed-6d75-46b7-9c80-641a7b76a749\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.233979 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/789593ed-6d75-46b7-9c80-641a7b76a749-registry-tls\") pod \"789593ed-6d75-46b7-9c80-641a7b76a749\" (UID: 
\"789593ed-6d75-46b7-9c80-641a7b76a749\") " Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.234022 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/789593ed-6d75-46b7-9c80-641a7b76a749-bound-sa-token\") pod \"789593ed-6d75-46b7-9c80-641a7b76a749\" (UID: \"789593ed-6d75-46b7-9c80-641a7b76a749\") " Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.234184 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzmdg\" (UniqueName: \"kubernetes.io/projected/984a7bce-d46b-4339-bd79-7fce25092b99-kube-api-access-hzmdg\") pod \"redhat-marketplace-q8h66\" (UID: \"984a7bce-d46b-4339-bd79-7fce25092b99\") " pod="openshift-marketplace/redhat-marketplace-q8h66" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.234231 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/984a7bce-d46b-4339-bd79-7fce25092b99-utilities\") pod \"redhat-marketplace-q8h66\" (UID: \"984a7bce-d46b-4339-bd79-7fce25092b99\") " pod="openshift-marketplace/redhat-marketplace-q8h66" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.234271 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/984a7bce-d46b-4339-bd79-7fce25092b99-catalog-content\") pod \"redhat-marketplace-q8h66\" (UID: \"984a7bce-d46b-4339-bd79-7fce25092b99\") " pod="openshift-marketplace/redhat-marketplace-q8h66" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.234789 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/984a7bce-d46b-4339-bd79-7fce25092b99-catalog-content\") pod \"redhat-marketplace-q8h66\" (UID: \"984a7bce-d46b-4339-bd79-7fce25092b99\") " pod="openshift-marketplace/redhat-marketplace-q8h66" Feb 16 17:05:46 crc 
kubenswrapper[4794]: I0216 17:05:46.235117 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/789593ed-6d75-46b7-9c80-641a7b76a749-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "789593ed-6d75-46b7-9c80-641a7b76a749" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.235428 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/984a7bce-d46b-4339-bd79-7fce25092b99-utilities\") pod \"redhat-marketplace-q8h66\" (UID: \"984a7bce-d46b-4339-bd79-7fce25092b99\") " pod="openshift-marketplace/redhat-marketplace-q8h66" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.235532 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/789593ed-6d75-46b7-9c80-641a7b76a749-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "789593ed-6d75-46b7-9c80-641a7b76a749" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.239823 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/789593ed-6d75-46b7-9c80-641a7b76a749-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "789593ed-6d75-46b7-9c80-641a7b76a749" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.245500 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/789593ed-6d75-46b7-9c80-641a7b76a749-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "789593ed-6d75-46b7-9c80-641a7b76a749" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.248685 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "789593ed-6d75-46b7-9c80-641a7b76a749" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.250904 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/789593ed-6d75-46b7-9c80-641a7b76a749-kube-api-access-shpd2" (OuterVolumeSpecName: "kube-api-access-shpd2") pod "789593ed-6d75-46b7-9c80-641a7b76a749" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749"). InnerVolumeSpecName "kube-api-access-shpd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.250962 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/789593ed-6d75-46b7-9c80-641a7b76a749-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "789593ed-6d75-46b7-9c80-641a7b76a749" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.253952 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzmdg\" (UniqueName: \"kubernetes.io/projected/984a7bce-d46b-4339-bd79-7fce25092b99-kube-api-access-hzmdg\") pod \"redhat-marketplace-q8h66\" (UID: \"984a7bce-d46b-4339-bd79-7fce25092b99\") " pod="openshift-marketplace/redhat-marketplace-q8h66" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.258025 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/789593ed-6d75-46b7-9c80-641a7b76a749-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "789593ed-6d75-46b7-9c80-641a7b76a749" (UID: "789593ed-6d75-46b7-9c80-641a7b76a749"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.333867 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8h66" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.335270 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3d4ba8e-df36-4a0b-8ea2-014e4f94993d-utilities\") pod \"redhat-operators-zw7gt\" (UID: \"b3d4ba8e-df36-4a0b-8ea2-014e4f94993d\") " pod="openshift-marketplace/redhat-operators-zw7gt" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.335556 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfvfc\" (UniqueName: \"kubernetes.io/projected/b3d4ba8e-df36-4a0b-8ea2-014e4f94993d-kube-api-access-bfvfc\") pod \"redhat-operators-zw7gt\" (UID: \"b3d4ba8e-df36-4a0b-8ea2-014e4f94993d\") " pod="openshift-marketplace/redhat-operators-zw7gt" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.335580 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3d4ba8e-df36-4a0b-8ea2-014e4f94993d-catalog-content\") pod \"redhat-operators-zw7gt\" (UID: \"b3d4ba8e-df36-4a0b-8ea2-014e4f94993d\") " pod="openshift-marketplace/redhat-operators-zw7gt" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.335623 4794 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/789593ed-6d75-46b7-9c80-641a7b76a749-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.335633 4794 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/789593ed-6d75-46b7-9c80-641a7b76a749-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.335643 4794 reconciler_common.go:293] "Volume detached for volume 
\"registry-tls\" (UniqueName: \"kubernetes.io/projected/789593ed-6d75-46b7-9c80-641a7b76a749-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.335654 4794 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/789593ed-6d75-46b7-9c80-641a7b76a749-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.335663 4794 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/789593ed-6d75-46b7-9c80-641a7b76a749-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.335673 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shpd2\" (UniqueName: \"kubernetes.io/projected/789593ed-6d75-46b7-9c80-641a7b76a749-kube-api-access-shpd2\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.335682 4794 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/789593ed-6d75-46b7-9c80-641a7b76a749-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.437064 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3d4ba8e-df36-4a0b-8ea2-014e4f94993d-utilities\") pod \"redhat-operators-zw7gt\" (UID: \"b3d4ba8e-df36-4a0b-8ea2-014e4f94993d\") " pod="openshift-marketplace/redhat-operators-zw7gt" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.437123 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfvfc\" (UniqueName: \"kubernetes.io/projected/b3d4ba8e-df36-4a0b-8ea2-014e4f94993d-kube-api-access-bfvfc\") pod \"redhat-operators-zw7gt\" (UID: \"b3d4ba8e-df36-4a0b-8ea2-014e4f94993d\") " 
pod="openshift-marketplace/redhat-operators-zw7gt" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.437148 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3d4ba8e-df36-4a0b-8ea2-014e4f94993d-catalog-content\") pod \"redhat-operators-zw7gt\" (UID: \"b3d4ba8e-df36-4a0b-8ea2-014e4f94993d\") " pod="openshift-marketplace/redhat-operators-zw7gt" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.437689 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3d4ba8e-df36-4a0b-8ea2-014e4f94993d-catalog-content\") pod \"redhat-operators-zw7gt\" (UID: \"b3d4ba8e-df36-4a0b-8ea2-014e4f94993d\") " pod="openshift-marketplace/redhat-operators-zw7gt" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.437946 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3d4ba8e-df36-4a0b-8ea2-014e4f94993d-utilities\") pod \"redhat-operators-zw7gt\" (UID: \"b3d4ba8e-df36-4a0b-8ea2-014e4f94993d\") " pod="openshift-marketplace/redhat-operators-zw7gt" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.463412 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfvfc\" (UniqueName: \"kubernetes.io/projected/b3d4ba8e-df36-4a0b-8ea2-014e4f94993d-kube-api-access-bfvfc\") pod \"redhat-operators-zw7gt\" (UID: \"b3d4ba8e-df36-4a0b-8ea2-014e4f94993d\") " pod="openshift-marketplace/redhat-operators-zw7gt" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.509431 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8h66"] Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.519738 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zw7gt" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.722622 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zw7gt"] Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.767900 4794 generic.go:334] "Generic (PLEG): container finished" podID="789593ed-6d75-46b7-9c80-641a7b76a749" containerID="91c2ff5c3f37f22bdd653102c9b771591f66a337535d5dc54f1705a24aab60c2" exitCode=0 Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.767978 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.768013 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" event={"ID":"789593ed-6d75-46b7-9c80-641a7b76a749","Type":"ContainerDied","Data":"91c2ff5c3f37f22bdd653102c9b771591f66a337535d5dc54f1705a24aab60c2"} Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.768057 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-h6xgf" event={"ID":"789593ed-6d75-46b7-9c80-641a7b76a749","Type":"ContainerDied","Data":"6cc72aa74e08be7b7f90c37d77fee398e40cd0ec7d74a50ff72a7b6ca094498d"} Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.768080 4794 scope.go:117] "RemoveContainer" containerID="91c2ff5c3f37f22bdd653102c9b771591f66a337535d5dc54f1705a24aab60c2" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.770818 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zw7gt" event={"ID":"b3d4ba8e-df36-4a0b-8ea2-014e4f94993d","Type":"ContainerStarted","Data":"36514e01d4b272cca18972960527482115416ba7a26792b8c8da1de1a98dea5d"} Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.772440 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-q8h66" event={"ID":"984a7bce-d46b-4339-bd79-7fce25092b99","Type":"ContainerStarted","Data":"0414b9c3a920558d87de085d6f6b77820dcbd8d1735ba32cd655181a80fed8ac"} Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.772468 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8h66" event={"ID":"984a7bce-d46b-4339-bd79-7fce25092b99","Type":"ContainerStarted","Data":"bd561c736e19406e24a50276e86b968be0fd8efb7b36018733fe201ac93a6c0f"} Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.776664 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwwrn" event={"ID":"d7f1aaab-f576-46e7-8dde-d4cf89e2ff10","Type":"ContainerStarted","Data":"db2ebab7e85a8d9e8cba0303168410efb4262300b8eb121e9af3ab803f143cae"} Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.780698 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrdnt" event={"ID":"9cf8044a-dc2d-47f7-9edb-166f30ac8ab2","Type":"ContainerStarted","Data":"bdcf07558f2b211b32d97d0a67b6b0b83ff9619d9d916da190c30a7c9096962f"} Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.795568 4794 scope.go:117] "RemoveContainer" containerID="91c2ff5c3f37f22bdd653102c9b771591f66a337535d5dc54f1705a24aab60c2" Feb 16 17:05:46 crc kubenswrapper[4794]: E0216 17:05:46.796975 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91c2ff5c3f37f22bdd653102c9b771591f66a337535d5dc54f1705a24aab60c2\": container with ID starting with 91c2ff5c3f37f22bdd653102c9b771591f66a337535d5dc54f1705a24aab60c2 not found: ID does not exist" containerID="91c2ff5c3f37f22bdd653102c9b771591f66a337535d5dc54f1705a24aab60c2" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.797015 4794 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"91c2ff5c3f37f22bdd653102c9b771591f66a337535d5dc54f1705a24aab60c2"} err="failed to get container status \"91c2ff5c3f37f22bdd653102c9b771591f66a337535d5dc54f1705a24aab60c2\": rpc error: code = NotFound desc = could not find container \"91c2ff5c3f37f22bdd653102c9b771591f66a337535d5dc54f1705a24aab60c2\": container with ID starting with 91c2ff5c3f37f22bdd653102c9b771591f66a337535d5dc54f1705a24aab60c2 not found: ID does not exist" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.803815 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dwwrn" podStartSLOduration=2.35740841 podStartE2EDuration="3.803798581s" podCreationTimestamp="2026-02-16 17:05:43 +0000 UTC" firstStartedPulling="2026-02-16 17:05:44.753430783 +0000 UTC m=+370.701525430" lastFinishedPulling="2026-02-16 17:05:46.199820954 +0000 UTC m=+372.147915601" observedRunningTime="2026-02-16 17:05:46.801264653 +0000 UTC m=+372.749359320" watchObservedRunningTime="2026-02-16 17:05:46.803798581 +0000 UTC m=+372.751893228" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.817283 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qrdnt" podStartSLOduration=2.444491101 podStartE2EDuration="3.817176762s" podCreationTimestamp="2026-02-16 17:05:43 +0000 UTC" firstStartedPulling="2026-02-16 17:05:44.755804805 +0000 UTC m=+370.703899452" lastFinishedPulling="2026-02-16 17:05:46.128490466 +0000 UTC m=+372.076585113" observedRunningTime="2026-02-16 17:05:46.815609848 +0000 UTC m=+372.763704505" watchObservedRunningTime="2026-02-16 17:05:46.817176762 +0000 UTC m=+372.765271409" Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.825912 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-h6xgf"] Feb 16 17:05:46 crc kubenswrapper[4794]: I0216 17:05:46.829563 4794 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-h6xgf"] Feb 16 17:05:47 crc kubenswrapper[4794]: I0216 17:05:47.789041 4794 generic.go:334] "Generic (PLEG): container finished" podID="b3d4ba8e-df36-4a0b-8ea2-014e4f94993d" containerID="6d23f8736844693021642b9f5d3ccf507d5f2d4396d0af12f69be0fc96eeb559" exitCode=0 Feb 16 17:05:47 crc kubenswrapper[4794]: I0216 17:05:47.789134 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zw7gt" event={"ID":"b3d4ba8e-df36-4a0b-8ea2-014e4f94993d","Type":"ContainerDied","Data":"6d23f8736844693021642b9f5d3ccf507d5f2d4396d0af12f69be0fc96eeb559"} Feb 16 17:05:47 crc kubenswrapper[4794]: I0216 17:05:47.792366 4794 generic.go:334] "Generic (PLEG): container finished" podID="984a7bce-d46b-4339-bd79-7fce25092b99" containerID="0414b9c3a920558d87de085d6f6b77820dcbd8d1735ba32cd655181a80fed8ac" exitCode=0 Feb 16 17:05:47 crc kubenswrapper[4794]: I0216 17:05:47.792781 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8h66" event={"ID":"984a7bce-d46b-4339-bd79-7fce25092b99","Type":"ContainerDied","Data":"0414b9c3a920558d87de085d6f6b77820dcbd8d1735ba32cd655181a80fed8ac"} Feb 16 17:05:48 crc kubenswrapper[4794]: I0216 17:05:48.797672 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="789593ed-6d75-46b7-9c80-641a7b76a749" path="/var/lib/kubelet/pods/789593ed-6d75-46b7-9c80-641a7b76a749/volumes" Feb 16 17:05:48 crc kubenswrapper[4794]: I0216 17:05:48.802662 4794 generic.go:334] "Generic (PLEG): container finished" podID="984a7bce-d46b-4339-bd79-7fce25092b99" containerID="a70b77d8e9108899cec43e7bfd8f1dfaa354498e7256d26164f0a0d69e6daf55" exitCode=0 Feb 16 17:05:48 crc kubenswrapper[4794]: I0216 17:05:48.802717 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8h66" 
event={"ID":"984a7bce-d46b-4339-bd79-7fce25092b99","Type":"ContainerDied","Data":"a70b77d8e9108899cec43e7bfd8f1dfaa354498e7256d26164f0a0d69e6daf55"} Feb 16 17:05:49 crc kubenswrapper[4794]: I0216 17:05:49.810172 4794 generic.go:334] "Generic (PLEG): container finished" podID="b3d4ba8e-df36-4a0b-8ea2-014e4f94993d" containerID="7d1da7212afc36f5497524a4d073e2b25a8dd1249bd09e6153504ab2f00b173b" exitCode=0 Feb 16 17:05:49 crc kubenswrapper[4794]: I0216 17:05:49.810272 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zw7gt" event={"ID":"b3d4ba8e-df36-4a0b-8ea2-014e4f94993d","Type":"ContainerDied","Data":"7d1da7212afc36f5497524a4d073e2b25a8dd1249bd09e6153504ab2f00b173b"} Feb 16 17:05:49 crc kubenswrapper[4794]: I0216 17:05:49.813311 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8h66" event={"ID":"984a7bce-d46b-4339-bd79-7fce25092b99","Type":"ContainerStarted","Data":"4be293870f2acdf203fde72d9d96a10fb61411d119b919350d77a2589d275624"} Feb 16 17:05:49 crc kubenswrapper[4794]: I0216 17:05:49.858050 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q8h66" podStartSLOduration=3.365828297 podStartE2EDuration="4.858027776s" podCreationTimestamp="2026-02-16 17:05:45 +0000 UTC" firstStartedPulling="2026-02-16 17:05:47.793215602 +0000 UTC m=+373.741310249" lastFinishedPulling="2026-02-16 17:05:49.285415081 +0000 UTC m=+375.233509728" observedRunningTime="2026-02-16 17:05:49.855122396 +0000 UTC m=+375.803217063" watchObservedRunningTime="2026-02-16 17:05:49.858027776 +0000 UTC m=+375.806122423" Feb 16 17:05:50 crc kubenswrapper[4794]: I0216 17:05:50.140972 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 16 17:05:50 crc kubenswrapper[4794]: I0216 17:05:50.141045 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:05:50 crc kubenswrapper[4794]: I0216 17:05:50.819871 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zw7gt" event={"ID":"b3d4ba8e-df36-4a0b-8ea2-014e4f94993d","Type":"ContainerStarted","Data":"782664f3877e2b9b981f3cb549ee7d2eaa8cf0ea3806f16a9e8bb908dc86fe65"} Feb 16 17:05:50 crc kubenswrapper[4794]: I0216 17:05:50.845881 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zw7gt" podStartSLOduration=2.456611167 podStartE2EDuration="4.845863883s" podCreationTimestamp="2026-02-16 17:05:46 +0000 UTC" firstStartedPulling="2026-02-16 17:05:47.790715446 +0000 UTC m=+373.738810093" lastFinishedPulling="2026-02-16 17:05:50.179968162 +0000 UTC m=+376.128062809" observedRunningTime="2026-02-16 17:05:50.845702307 +0000 UTC m=+376.793796964" watchObservedRunningTime="2026-02-16 17:05:50.845863883 +0000 UTC m=+376.793958530" Feb 16 17:05:53 crc kubenswrapper[4794]: I0216 17:05:53.932432 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qrdnt" Feb 16 17:05:53 crc kubenswrapper[4794]: I0216 17:05:53.932758 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qrdnt" Feb 16 17:05:53 crc kubenswrapper[4794]: I0216 17:05:53.968747 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qrdnt" Feb 16 17:05:54 crc kubenswrapper[4794]: I0216 
17:05:54.154578 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dwwrn" Feb 16 17:05:54 crc kubenswrapper[4794]: I0216 17:05:54.154678 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dwwrn" Feb 16 17:05:54 crc kubenswrapper[4794]: I0216 17:05:54.196598 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dwwrn" Feb 16 17:05:54 crc kubenswrapper[4794]: I0216 17:05:54.904427 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dwwrn" Feb 16 17:05:54 crc kubenswrapper[4794]: I0216 17:05:54.909486 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qrdnt" Feb 16 17:05:56 crc kubenswrapper[4794]: I0216 17:05:56.334388 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q8h66" Feb 16 17:05:56 crc kubenswrapper[4794]: I0216 17:05:56.334756 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q8h66" Feb 16 17:05:56 crc kubenswrapper[4794]: I0216 17:05:56.385847 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q8h66" Feb 16 17:05:56 crc kubenswrapper[4794]: I0216 17:05:56.520009 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zw7gt" Feb 16 17:05:56 crc kubenswrapper[4794]: I0216 17:05:56.520438 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zw7gt" Feb 16 17:05:56 crc kubenswrapper[4794]: I0216 17:05:56.573067 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-zw7gt" Feb 16 17:05:56 crc kubenswrapper[4794]: I0216 17:05:56.904957 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zw7gt" Feb 16 17:05:56 crc kubenswrapper[4794]: I0216 17:05:56.910553 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q8h66" Feb 16 17:06:13 crc kubenswrapper[4794]: I0216 17:06:13.508516 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-hc9j5"] Feb 16 17:06:13 crc kubenswrapper[4794]: I0216 17:06:13.510117 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-hc9j5" Feb 16 17:06:13 crc kubenswrapper[4794]: I0216 17:06:13.514480 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Feb 16 17:06:13 crc kubenswrapper[4794]: I0216 17:06:13.514759 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 16 17:06:13 crc kubenswrapper[4794]: I0216 17:06:13.514794 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 16 17:06:13 crc kubenswrapper[4794]: I0216 17:06:13.515752 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 16 17:06:13 crc kubenswrapper[4794]: I0216 17:06:13.516465 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 16 17:06:13 crc kubenswrapper[4794]: I0216 17:06:13.528838 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-hc9j5"] Feb 16 17:06:13 crc kubenswrapper[4794]: I0216 17:06:13.683399 
4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx2jq\" (UniqueName: \"kubernetes.io/projected/2bb3e49f-f86b-4d4f-81cc-2dd6be8ec9d9-kube-api-access-qx2jq\") pod \"cluster-monitoring-operator-6d5b84845-hc9j5\" (UID: \"2bb3e49f-f86b-4d4f-81cc-2dd6be8ec9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-hc9j5" Feb 16 17:06:13 crc kubenswrapper[4794]: I0216 17:06:13.683446 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bb3e49f-f86b-4d4f-81cc-2dd6be8ec9d9-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-hc9j5\" (UID: \"2bb3e49f-f86b-4d4f-81cc-2dd6be8ec9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-hc9j5" Feb 16 17:06:13 crc kubenswrapper[4794]: I0216 17:06:13.683485 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/2bb3e49f-f86b-4d4f-81cc-2dd6be8ec9d9-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-hc9j5\" (UID: \"2bb3e49f-f86b-4d4f-81cc-2dd6be8ec9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-hc9j5" Feb 16 17:06:13 crc kubenswrapper[4794]: I0216 17:06:13.784966 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx2jq\" (UniqueName: \"kubernetes.io/projected/2bb3e49f-f86b-4d4f-81cc-2dd6be8ec9d9-kube-api-access-qx2jq\") pod \"cluster-monitoring-operator-6d5b84845-hc9j5\" (UID: \"2bb3e49f-f86b-4d4f-81cc-2dd6be8ec9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-hc9j5" Feb 16 17:06:13 crc kubenswrapper[4794]: I0216 17:06:13.785081 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/2bb3e49f-f86b-4d4f-81cc-2dd6be8ec9d9-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-hc9j5\" (UID: \"2bb3e49f-f86b-4d4f-81cc-2dd6be8ec9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-hc9j5" Feb 16 17:06:13 crc kubenswrapper[4794]: I0216 17:06:13.785177 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/2bb3e49f-f86b-4d4f-81cc-2dd6be8ec9d9-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-hc9j5\" (UID: \"2bb3e49f-f86b-4d4f-81cc-2dd6be8ec9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-hc9j5" Feb 16 17:06:13 crc kubenswrapper[4794]: I0216 17:06:13.787792 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/2bb3e49f-f86b-4d4f-81cc-2dd6be8ec9d9-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-hc9j5\" (UID: \"2bb3e49f-f86b-4d4f-81cc-2dd6be8ec9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-hc9j5" Feb 16 17:06:13 crc kubenswrapper[4794]: I0216 17:06:13.792871 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bb3e49f-f86b-4d4f-81cc-2dd6be8ec9d9-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-hc9j5\" (UID: \"2bb3e49f-f86b-4d4f-81cc-2dd6be8ec9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-hc9j5" Feb 16 17:06:13 crc kubenswrapper[4794]: I0216 17:06:13.818884 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx2jq\" (UniqueName: \"kubernetes.io/projected/2bb3e49f-f86b-4d4f-81cc-2dd6be8ec9d9-kube-api-access-qx2jq\") pod \"cluster-monitoring-operator-6d5b84845-hc9j5\" (UID: \"2bb3e49f-f86b-4d4f-81cc-2dd6be8ec9d9\") " 
pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-hc9j5" Feb 16 17:06:13 crc kubenswrapper[4794]: I0216 17:06:13.835876 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-hc9j5" Feb 16 17:06:14 crc kubenswrapper[4794]: I0216 17:06:14.061694 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-hc9j5"] Feb 16 17:06:14 crc kubenswrapper[4794]: I0216 17:06:14.965905 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-hc9j5" event={"ID":"2bb3e49f-f86b-4d4f-81cc-2dd6be8ec9d9","Type":"ContainerStarted","Data":"3427a21ddd3f0fcf8688f370ea7d5be3ce77f1abb0122c492a325c4e7352c2eb"} Feb 16 17:06:15 crc kubenswrapper[4794]: I0216 17:06:15.972908 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-hc9j5" event={"ID":"2bb3e49f-f86b-4d4f-81cc-2dd6be8ec9d9","Type":"ContainerStarted","Data":"bc6c4ffdd1918217db55f8fea84441030d9a37971e220929264692ee58339e5b"} Feb 16 17:06:16 crc kubenswrapper[4794]: I0216 17:06:16.232898 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-hc9j5" podStartSLOduration=1.610248283 podStartE2EDuration="3.232876738s" podCreationTimestamp="2026-02-16 17:06:13 +0000 UTC" firstStartedPulling="2026-02-16 17:06:14.078664952 +0000 UTC m=+400.026759609" lastFinishedPulling="2026-02-16 17:06:15.701293417 +0000 UTC m=+401.649388064" observedRunningTime="2026-02-16 17:06:15.99906564 +0000 UTC m=+401.947160287" watchObservedRunningTime="2026-02-16 17:06:16.232876738 +0000 UTC m=+402.180971385" Feb 16 17:06:16 crc kubenswrapper[4794]: I0216 17:06:16.237207 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hx76n"] 
Feb 16 17:06:16 crc kubenswrapper[4794]: I0216 17:06:16.238027 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hx76n" Feb 16 17:06:16 crc kubenswrapper[4794]: I0216 17:06:16.251530 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 16 17:06:16 crc kubenswrapper[4794]: I0216 17:06:16.251746 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-pwbpn" Feb 16 17:06:16 crc kubenswrapper[4794]: I0216 17:06:16.270210 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hx76n"] Feb 16 17:06:16 crc kubenswrapper[4794]: I0216 17:06:16.316020 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e546e186-b222-4348-83cc-a44d668db971-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-hx76n\" (UID: \"e546e186-b222-4348-83cc-a44d668db971\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hx76n" Feb 16 17:06:16 crc kubenswrapper[4794]: I0216 17:06:16.417201 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e546e186-b222-4348-83cc-a44d668db971-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-hx76n\" (UID: \"e546e186-b222-4348-83cc-a44d668db971\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hx76n" Feb 16 17:06:16 crc kubenswrapper[4794]: I0216 17:06:16.422968 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e546e186-b222-4348-83cc-a44d668db971-tls-certificates\") 
pod \"prometheus-operator-admission-webhook-f54c54754-hx76n\" (UID: \"e546e186-b222-4348-83cc-a44d668db971\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hx76n" Feb 16 17:06:16 crc kubenswrapper[4794]: I0216 17:06:16.576485 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hx76n" Feb 16 17:06:16 crc kubenswrapper[4794]: I0216 17:06:16.748589 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hx76n"] Feb 16 17:06:16 crc kubenswrapper[4794]: W0216 17:06:16.754346 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode546e186_b222_4348_83cc_a44d668db971.slice/crio-aeb09a0cf4cdf40efde862b7fae790597b1880e005ec5ce1bb7df553f0dbad53 WatchSource:0}: Error finding container aeb09a0cf4cdf40efde862b7fae790597b1880e005ec5ce1bb7df553f0dbad53: Status 404 returned error can't find the container with id aeb09a0cf4cdf40efde862b7fae790597b1880e005ec5ce1bb7df553f0dbad53 Feb 16 17:06:16 crc kubenswrapper[4794]: I0216 17:06:16.979007 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hx76n" event={"ID":"e546e186-b222-4348-83cc-a44d668db971","Type":"ContainerStarted","Data":"aeb09a0cf4cdf40efde862b7fae790597b1880e005ec5ce1bb7df553f0dbad53"} Feb 16 17:06:18 crc kubenswrapper[4794]: I0216 17:06:18.995811 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hx76n" event={"ID":"e546e186-b222-4348-83cc-a44d668db971","Type":"ContainerStarted","Data":"a8a9c9c049500a1fb0686df6bd78146cf7c0c2aa5b8ff8ab563dc4a26775b4c7"} Feb 16 17:06:18 crc kubenswrapper[4794]: I0216 17:06:18.996525 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hx76n" Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.003957 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hx76n" Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.023366 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hx76n" podStartSLOduration=1.851859068 podStartE2EDuration="3.023333203s" podCreationTimestamp="2026-02-16 17:06:16 +0000 UTC" firstStartedPulling="2026-02-16 17:06:16.756108342 +0000 UTC m=+402.704202989" lastFinishedPulling="2026-02-16 17:06:17.927582477 +0000 UTC m=+403.875677124" observedRunningTime="2026-02-16 17:06:19.021343854 +0000 UTC m=+404.969438601" watchObservedRunningTime="2026-02-16 17:06:19.023333203 +0000 UTC m=+404.971427860" Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.289489 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-lw5th"] Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.290565 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-lw5th" Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.292413 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.292644 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-9cgk4" Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.292720 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.292874 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.299910 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-lw5th"] Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.356081 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8491308f-a6cc-498f-a501-9aca7f054392-metrics-client-ca\") pod \"prometheus-operator-db54df47d-lw5th\" (UID: \"8491308f-a6cc-498f-a501-9aca7f054392\") " pod="openshift-monitoring/prometheus-operator-db54df47d-lw5th" Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.356125 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jpj5\" (UniqueName: \"kubernetes.io/projected/8491308f-a6cc-498f-a501-9aca7f054392-kube-api-access-8jpj5\") pod \"prometheus-operator-db54df47d-lw5th\" (UID: \"8491308f-a6cc-498f-a501-9aca7f054392\") " pod="openshift-monitoring/prometheus-operator-db54df47d-lw5th" Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.356172 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/8491308f-a6cc-498f-a501-9aca7f054392-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-lw5th\" (UID: \"8491308f-a6cc-498f-a501-9aca7f054392\") " pod="openshift-monitoring/prometheus-operator-db54df47d-lw5th" Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.356189 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8491308f-a6cc-498f-a501-9aca7f054392-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-lw5th\" (UID: \"8491308f-a6cc-498f-a501-9aca7f054392\") " pod="openshift-monitoring/prometheus-operator-db54df47d-lw5th" Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.457039 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8491308f-a6cc-498f-a501-9aca7f054392-metrics-client-ca\") pod \"prometheus-operator-db54df47d-lw5th\" (UID: \"8491308f-a6cc-498f-a501-9aca7f054392\") " pod="openshift-monitoring/prometheus-operator-db54df47d-lw5th" Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.457410 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jpj5\" (UniqueName: \"kubernetes.io/projected/8491308f-a6cc-498f-a501-9aca7f054392-kube-api-access-8jpj5\") pod \"prometheus-operator-db54df47d-lw5th\" (UID: \"8491308f-a6cc-498f-a501-9aca7f054392\") " pod="openshift-monitoring/prometheus-operator-db54df47d-lw5th" Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.457460 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/8491308f-a6cc-498f-a501-9aca7f054392-prometheus-operator-tls\") pod 
\"prometheus-operator-db54df47d-lw5th\" (UID: \"8491308f-a6cc-498f-a501-9aca7f054392\") " pod="openshift-monitoring/prometheus-operator-db54df47d-lw5th" Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.457478 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8491308f-a6cc-498f-a501-9aca7f054392-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-lw5th\" (UID: \"8491308f-a6cc-498f-a501-9aca7f054392\") " pod="openshift-monitoring/prometheus-operator-db54df47d-lw5th" Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.458109 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8491308f-a6cc-498f-a501-9aca7f054392-metrics-client-ca\") pod \"prometheus-operator-db54df47d-lw5th\" (UID: \"8491308f-a6cc-498f-a501-9aca7f054392\") " pod="openshift-monitoring/prometheus-operator-db54df47d-lw5th" Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.463467 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8491308f-a6cc-498f-a501-9aca7f054392-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-lw5th\" (UID: \"8491308f-a6cc-498f-a501-9aca7f054392\") " pod="openshift-monitoring/prometheus-operator-db54df47d-lw5th" Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.464866 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/8491308f-a6cc-498f-a501-9aca7f054392-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-lw5th\" (UID: \"8491308f-a6cc-498f-a501-9aca7f054392\") " pod="openshift-monitoring/prometheus-operator-db54df47d-lw5th" Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.476771 4794 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jpj5\" (UniqueName: \"kubernetes.io/projected/8491308f-a6cc-498f-a501-9aca7f054392-kube-api-access-8jpj5\") pod \"prometheus-operator-db54df47d-lw5th\" (UID: \"8491308f-a6cc-498f-a501-9aca7f054392\") " pod="openshift-monitoring/prometheus-operator-db54df47d-lw5th" Feb 16 17:06:19 crc kubenswrapper[4794]: I0216 17:06:19.621629 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-lw5th" Feb 16 17:06:20 crc kubenswrapper[4794]: I0216 17:06:20.080930 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-lw5th"] Feb 16 17:06:20 crc kubenswrapper[4794]: I0216 17:06:20.141050 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:06:20 crc kubenswrapper[4794]: I0216 17:06:20.141631 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:06:20 crc kubenswrapper[4794]: I0216 17:06:20.141688 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 17:06:20 crc kubenswrapper[4794]: I0216 17:06:20.142235 4794 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b2e80e5061d3d639e2192db6249af8300dc44db1cba1d8938a19b86cfdd0833f"} 
pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:06:20 crc kubenswrapper[4794]: I0216 17:06:20.142312 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" containerID="cri-o://b2e80e5061d3d639e2192db6249af8300dc44db1cba1d8938a19b86cfdd0833f" gracePeriod=600 Feb 16 17:06:21 crc kubenswrapper[4794]: I0216 17:06:21.014569 4794 generic.go:334] "Generic (PLEG): container finished" podID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerID="b2e80e5061d3d639e2192db6249af8300dc44db1cba1d8938a19b86cfdd0833f" exitCode=0 Feb 16 17:06:21 crc kubenswrapper[4794]: I0216 17:06:21.014650 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerDied","Data":"b2e80e5061d3d639e2192db6249af8300dc44db1cba1d8938a19b86cfdd0833f"} Feb 16 17:06:21 crc kubenswrapper[4794]: I0216 17:06:21.015024 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"c272885df85830363f92a97efc1eb57e276b4a14b8042b5e60c9c53b0e8dd10b"} Feb 16 17:06:21 crc kubenswrapper[4794]: I0216 17:06:21.015045 4794 scope.go:117] "RemoveContainer" containerID="97257c683f36fec9a6d4d0e7ee85af2ea7fa6143869803e437f99862f5e1d18a" Feb 16 17:06:21 crc kubenswrapper[4794]: I0216 17:06:21.021681 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-lw5th" event={"ID":"8491308f-a6cc-498f-a501-9aca7f054392","Type":"ContainerStarted","Data":"b5cf098dc05c3b288954f94461dddcc1f4159da772efc6b909e98852397286bd"} Feb 
16 17:06:22 crc kubenswrapper[4794]: I0216 17:06:22.030936 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-lw5th" event={"ID":"8491308f-a6cc-498f-a501-9aca7f054392","Type":"ContainerStarted","Data":"7171d8bdb60fe02da86221dd52365aa898cec912302f3fc9638db15439aa4eb4"} Feb 16 17:06:22 crc kubenswrapper[4794]: I0216 17:06:22.031921 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-lw5th" event={"ID":"8491308f-a6cc-498f-a501-9aca7f054392","Type":"ContainerStarted","Data":"3f6988d76a4a7257eecde4f85dd705c13599e727bbbd2ec7cc78fb47e940eb83"} Feb 16 17:06:22 crc kubenswrapper[4794]: I0216 17:06:22.060153 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-lw5th" podStartSLOduration=1.651942054 podStartE2EDuration="3.060135438s" podCreationTimestamp="2026-02-16 17:06:19 +0000 UTC" firstStartedPulling="2026-02-16 17:06:20.08838051 +0000 UTC m=+406.036475177" lastFinishedPulling="2026-02-16 17:06:21.496573914 +0000 UTC m=+407.444668561" observedRunningTime="2026-02-16 17:06:22.055201138 +0000 UTC m=+408.003295785" watchObservedRunningTime="2026-02-16 17:06:22.060135438 +0000 UTC m=+408.008230085" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.623953 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh"] Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.626132 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.628629 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.629025 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.629640 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-dp9ns" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.641558 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh"] Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.645067 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh"] Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.646729 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.650137 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.650363 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.650613 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-76n54" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.651498 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.661647 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh"] Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.710931 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/e8c730f2-32d0-4cf6-b422-c89448a58aec-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-9btkh\" (UID: \"e8c730f2-32d0-4cf6-b422-c89448a58aec\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.711004 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4388adc3-ddda-464a-89d9-a5ec287898f6-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-ddhwh\" (UID: \"4388adc3-ddda-464a-89d9-a5ec287898f6\") " 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.711046 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e8c730f2-32d0-4cf6-b422-c89448a58aec-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-9btkh\" (UID: \"e8c730f2-32d0-4cf6-b422-c89448a58aec\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.711067 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4388adc3-ddda-464a-89d9-a5ec287898f6-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-ddhwh\" (UID: \"4388adc3-ddda-464a-89d9-a5ec287898f6\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.711088 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e8c730f2-32d0-4cf6-b422-c89448a58aec-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-9btkh\" (UID: \"e8c730f2-32d0-4cf6-b422-c89448a58aec\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.711107 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4388adc3-ddda-464a-89d9-a5ec287898f6-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-ddhwh\" (UID: \"4388adc3-ddda-464a-89d9-a5ec287898f6\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.711128 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/e8c730f2-32d0-4cf6-b422-c89448a58aec-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9btkh\" (UID: \"e8c730f2-32d0-4cf6-b422-c89448a58aec\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.711151 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/e8c730f2-32d0-4cf6-b422-c89448a58aec-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-9btkh\" (UID: \"e8c730f2-32d0-4cf6-b422-c89448a58aec\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.711169 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4x89\" (UniqueName: \"kubernetes.io/projected/e8c730f2-32d0-4cf6-b422-c89448a58aec-kube-api-access-c4x89\") pod \"kube-state-metrics-777cb5bd5d-9btkh\" (UID: \"e8c730f2-32d0-4cf6-b422-c89448a58aec\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.711188 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn4pl\" (UniqueName: \"kubernetes.io/projected/4388adc3-ddda-464a-89d9-a5ec287898f6-kube-api-access-kn4pl\") pod \"openshift-state-metrics-566fddb674-ddhwh\" (UID: \"4388adc3-ddda-464a-89d9-a5ec287898f6\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.786667 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-fh2sl"] Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.787753 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.789913 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.789944 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-qqs4b" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.790498 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.812857 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4x89\" (UniqueName: \"kubernetes.io/projected/e8c730f2-32d0-4cf6-b422-c89448a58aec-kube-api-access-c4x89\") pod \"kube-state-metrics-777cb5bd5d-9btkh\" (UID: \"e8c730f2-32d0-4cf6-b422-c89448a58aec\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.812912 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn4pl\" (UniqueName: \"kubernetes.io/projected/4388adc3-ddda-464a-89d9-a5ec287898f6-kube-api-access-kn4pl\") pod \"openshift-state-metrics-566fddb674-ddhwh\" (UID: \"4388adc3-ddda-464a-89d9-a5ec287898f6\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.812935 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/e8c730f2-32d0-4cf6-b422-c89448a58aec-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-9btkh\" (UID: \"e8c730f2-32d0-4cf6-b422-c89448a58aec\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" Feb 16 17:06:23 crc 
kubenswrapper[4794]: I0216 17:06:23.812984 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4388adc3-ddda-464a-89d9-a5ec287898f6-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-ddhwh\" (UID: \"4388adc3-ddda-464a-89d9-a5ec287898f6\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.813017 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e8c730f2-32d0-4cf6-b422-c89448a58aec-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-9btkh\" (UID: \"e8c730f2-32d0-4cf6-b422-c89448a58aec\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.813038 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4388adc3-ddda-464a-89d9-a5ec287898f6-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-ddhwh\" (UID: \"4388adc3-ddda-464a-89d9-a5ec287898f6\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.813059 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e8c730f2-32d0-4cf6-b422-c89448a58aec-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-9btkh\" (UID: \"e8c730f2-32d0-4cf6-b422-c89448a58aec\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.813081 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/4388adc3-ddda-464a-89d9-a5ec287898f6-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-ddhwh\" (UID: \"4388adc3-ddda-464a-89d9-a5ec287898f6\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.813104 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/e8c730f2-32d0-4cf6-b422-c89448a58aec-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9btkh\" (UID: \"e8c730f2-32d0-4cf6-b422-c89448a58aec\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.813130 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/e8c730f2-32d0-4cf6-b422-c89448a58aec-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-9btkh\" (UID: \"e8c730f2-32d0-4cf6-b422-c89448a58aec\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.813608 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/e8c730f2-32d0-4cf6-b422-c89448a58aec-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-9btkh\" (UID: \"e8c730f2-32d0-4cf6-b422-c89448a58aec\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.814244 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e8c730f2-32d0-4cf6-b422-c89448a58aec-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-9btkh\" (UID: \"e8c730f2-32d0-4cf6-b422-c89448a58aec\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" Feb 16 17:06:23 crc kubenswrapper[4794]: E0216 
17:06:23.814361 4794 secret.go:188] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.814375 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/e8c730f2-32d0-4cf6-b422-c89448a58aec-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-9btkh\" (UID: \"e8c730f2-32d0-4cf6-b422-c89448a58aec\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.814387 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4388adc3-ddda-464a-89d9-a5ec287898f6-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-ddhwh\" (UID: \"4388adc3-ddda-464a-89d9-a5ec287898f6\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh" Feb 16 17:06:23 crc kubenswrapper[4794]: E0216 17:06:23.814414 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8c730f2-32d0-4cf6-b422-c89448a58aec-kube-state-metrics-tls podName:e8c730f2-32d0-4cf6-b422-c89448a58aec nodeName:}" failed. No retries permitted until 2026-02-16 17:06:24.31439777 +0000 UTC m=+410.262492417 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/e8c730f2-32d0-4cf6-b422-c89448a58aec-kube-state-metrics-tls") pod "kube-state-metrics-777cb5bd5d-9btkh" (UID: "e8c730f2-32d0-4cf6-b422-c89448a58aec") : secret "kube-state-metrics-tls" not found Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.819025 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4388adc3-ddda-464a-89d9-a5ec287898f6-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-ddhwh\" (UID: \"4388adc3-ddda-464a-89d9-a5ec287898f6\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.821918 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4388adc3-ddda-464a-89d9-a5ec287898f6-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-ddhwh\" (UID: \"4388adc3-ddda-464a-89d9-a5ec287898f6\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.822739 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e8c730f2-32d0-4cf6-b422-c89448a58aec-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-9btkh\" (UID: \"e8c730f2-32d0-4cf6-b422-c89448a58aec\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.830923 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4x89\" (UniqueName: \"kubernetes.io/projected/e8c730f2-32d0-4cf6-b422-c89448a58aec-kube-api-access-c4x89\") pod \"kube-state-metrics-777cb5bd5d-9btkh\" (UID: 
\"e8c730f2-32d0-4cf6-b422-c89448a58aec\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.842043 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn4pl\" (UniqueName: \"kubernetes.io/projected/4388adc3-ddda-464a-89d9-a5ec287898f6-kube-api-access-kn4pl\") pod \"openshift-state-metrics-566fddb674-ddhwh\" (UID: \"4388adc3-ddda-464a-89d9-a5ec287898f6\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.913954 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b1d22f14-f473-4dc8-a538-201f37a8ae98-node-exporter-textfile\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.914005 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b1d22f14-f473-4dc8-a538-201f37a8ae98-node-exporter-tls\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.914026 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b1d22f14-f473-4dc8-a538-201f37a8ae98-node-exporter-wtmp\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.914049 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/b1d22f14-f473-4dc8-a538-201f37a8ae98-sys\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.914068 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b1d22f14-f473-4dc8-a538-201f37a8ae98-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.914086 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b1d22f14-f473-4dc8-a538-201f37a8ae98-root\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.914119 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b1d22f14-f473-4dc8-a538-201f37a8ae98-metrics-client-ca\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.914138 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2sdj\" (UniqueName: \"kubernetes.io/projected/b1d22f14-f473-4dc8-a538-201f37a8ae98-kube-api-access-f2sdj\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:23 crc kubenswrapper[4794]: I0216 17:06:23.941935 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.014955 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b1d22f14-f473-4dc8-a538-201f37a8ae98-node-exporter-textfile\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.015008 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b1d22f14-f473-4dc8-a538-201f37a8ae98-node-exporter-tls\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.015029 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b1d22f14-f473-4dc8-a538-201f37a8ae98-node-exporter-wtmp\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.015053 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b1d22f14-f473-4dc8-a538-201f37a8ae98-sys\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.015074 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b1d22f14-f473-4dc8-a538-201f37a8ae98-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") 
" pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.015098 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b1d22f14-f473-4dc8-a538-201f37a8ae98-root\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.015131 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b1d22f14-f473-4dc8-a538-201f37a8ae98-metrics-client-ca\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.015151 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2sdj\" (UniqueName: \"kubernetes.io/projected/b1d22f14-f473-4dc8-a538-201f37a8ae98-kube-api-access-f2sdj\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.015159 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b1d22f14-f473-4dc8-a538-201f37a8ae98-sys\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.015213 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b1d22f14-f473-4dc8-a538-201f37a8ae98-root\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.015417 4794 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b1d22f14-f473-4dc8-a538-201f37a8ae98-node-exporter-textfile\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:24 crc kubenswrapper[4794]: E0216 17:06:24.015504 4794 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.015520 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b1d22f14-f473-4dc8-a538-201f37a8ae98-node-exporter-wtmp\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:24 crc kubenswrapper[4794]: E0216 17:06:24.015547 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1d22f14-f473-4dc8-a538-201f37a8ae98-node-exporter-tls podName:b1d22f14-f473-4dc8-a538-201f37a8ae98 nodeName:}" failed. No retries permitted until 2026-02-16 17:06:24.515530973 +0000 UTC m=+410.463625680 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/b1d22f14-f473-4dc8-a538-201f37a8ae98-node-exporter-tls") pod "node-exporter-fh2sl" (UID: "b1d22f14-f473-4dc8-a538-201f37a8ae98") : secret "node-exporter-tls" not found Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.015809 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b1d22f14-f473-4dc8-a538-201f37a8ae98-metrics-client-ca\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.022233 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b1d22f14-f473-4dc8-a538-201f37a8ae98-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.030105 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2sdj\" (UniqueName: \"kubernetes.io/projected/b1d22f14-f473-4dc8-a538-201f37a8ae98-kube-api-access-f2sdj\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.318820 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/e8c730f2-32d0-4cf6-b422-c89448a58aec-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9btkh\" (UID: \"e8c730f2-32d0-4cf6-b422-c89448a58aec\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.329019 4794 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/e8c730f2-32d0-4cf6-b422-c89448a58aec-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9btkh\" (UID: \"e8c730f2-32d0-4cf6-b422-c89448a58aec\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.354419 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh"] Feb 16 17:06:24 crc kubenswrapper[4794]: W0216 17:06:24.369584 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4388adc3_ddda_464a_89d9_a5ec287898f6.slice/crio-294f77f57ca681cd5094e8f3d4dfd8330daa9c1bc4277eefd30a96cf544477eb WatchSource:0}: Error finding container 294f77f57ca681cd5094e8f3d4dfd8330daa9c1bc4277eefd30a96cf544477eb: Status 404 returned error can't find the container with id 294f77f57ca681cd5094e8f3d4dfd8330daa9c1bc4277eefd30a96cf544477eb Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.522356 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b1d22f14-f473-4dc8-a538-201f37a8ae98-node-exporter-tls\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.526338 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b1d22f14-f473-4dc8-a538-201f37a8ae98-node-exporter-tls\") pod \"node-exporter-fh2sl\" (UID: \"b1d22f14-f473-4dc8-a538-201f37a8ae98\") " pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.559477 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.700269 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-fh2sl" Feb 16 17:06:24 crc kubenswrapper[4794]: W0216 17:06:24.720072 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1d22f14_f473_4dc8_a538_201f37a8ae98.slice/crio-b089f25ebf4a184f63a3a75d3649d27b68328232ef1fce90fa09755ea9cb3772 WatchSource:0}: Error finding container b089f25ebf4a184f63a3a75d3649d27b68328232ef1fce90fa09755ea9cb3772: Status 404 returned error can't find the container with id b089f25ebf4a184f63a3a75d3649d27b68328232ef1fce90fa09755ea9cb3772 Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.733966 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.735597 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.742319 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.742347 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.742500 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.742562 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.742908 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.743231 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.743322 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.745228 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.746355 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-h92lv" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.758835 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.828061 4794 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/12167c1d-a63f-4593-96c4-31d4f2bbd004-tls-assets\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.828115 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/12167c1d-a63f-4593-96c4-31d4f2bbd004-config-volume\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.828145 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/12167c1d-a63f-4593-96c4-31d4f2bbd004-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.828162 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/12167c1d-a63f-4593-96c4-31d4f2bbd004-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.828180 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwwlb\" (UniqueName: \"kubernetes.io/projected/12167c1d-a63f-4593-96c4-31d4f2bbd004-kube-api-access-kwwlb\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: 
I0216 17:06:24.828197 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/12167c1d-a63f-4593-96c4-31d4f2bbd004-config-out\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.828217 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/12167c1d-a63f-4593-96c4-31d4f2bbd004-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.828252 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/12167c1d-a63f-4593-96c4-31d4f2bbd004-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.828269 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/12167c1d-a63f-4593-96c4-31d4f2bbd004-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.828288 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/12167c1d-a63f-4593-96c4-31d4f2bbd004-secret-alertmanager-kube-rbac-proxy\") pod 
\"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.828321 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/12167c1d-a63f-4593-96c4-31d4f2bbd004-web-config\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.828341 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12167c1d-a63f-4593-96c4-31d4f2bbd004-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.889367 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh"] Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.929590 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/12167c1d-a63f-4593-96c4-31d4f2bbd004-web-config\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.929635 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12167c1d-a63f-4593-96c4-31d4f2bbd004-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.929668 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/12167c1d-a63f-4593-96c4-31d4f2bbd004-tls-assets\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.929698 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/12167c1d-a63f-4593-96c4-31d4f2bbd004-config-volume\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.929736 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/12167c1d-a63f-4593-96c4-31d4f2bbd004-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.929758 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/12167c1d-a63f-4593-96c4-31d4f2bbd004-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.929786 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwwlb\" (UniqueName: \"kubernetes.io/projected/12167c1d-a63f-4593-96c4-31d4f2bbd004-kube-api-access-kwwlb\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.929812 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/12167c1d-a63f-4593-96c4-31d4f2bbd004-config-out\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.929835 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/12167c1d-a63f-4593-96c4-31d4f2bbd004-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.929880 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/12167c1d-a63f-4593-96c4-31d4f2bbd004-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.929905 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/12167c1d-a63f-4593-96c4-31d4f2bbd004-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.929934 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/12167c1d-a63f-4593-96c4-31d4f2bbd004-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: E0216 17:06:24.930374 4794 
secret.go:188] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Feb 16 17:06:24 crc kubenswrapper[4794]: E0216 17:06:24.930437 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12167c1d-a63f-4593-96c4-31d4f2bbd004-secret-alertmanager-main-tls podName:12167c1d-a63f-4593-96c4-31d4f2bbd004 nodeName:}" failed. No retries permitted until 2026-02-16 17:06:25.430420194 +0000 UTC m=+411.378514841 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/12167c1d-a63f-4593-96c4-31d4f2bbd004-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "12167c1d-a63f-4593-96c4-31d4f2bbd004") : secret "alertmanager-main-tls" not found Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.930452 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/12167c1d-a63f-4593-96c4-31d4f2bbd004-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.930810 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/12167c1d-a63f-4593-96c4-31d4f2bbd004-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.930928 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12167c1d-a63f-4593-96c4-31d4f2bbd004-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc 
kubenswrapper[4794]: I0216 17:06:24.934506 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/12167c1d-a63f-4593-96c4-31d4f2bbd004-config-volume\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.934525 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/12167c1d-a63f-4593-96c4-31d4f2bbd004-web-config\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.934559 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/12167c1d-a63f-4593-96c4-31d4f2bbd004-tls-assets\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.934996 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/12167c1d-a63f-4593-96c4-31d4f2bbd004-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.935798 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/12167c1d-a63f-4593-96c4-31d4f2bbd004-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.936444 4794 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/12167c1d-a63f-4593-96c4-31d4f2bbd004-config-out\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.940810 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/12167c1d-a63f-4593-96c4-31d4f2bbd004-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:24 crc kubenswrapper[4794]: I0216 17:06:24.946222 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwwlb\" (UniqueName: \"kubernetes.io/projected/12167c1d-a63f-4593-96c4-31d4f2bbd004-kube-api-access-kwwlb\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.060672 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-fh2sl" event={"ID":"b1d22f14-f473-4dc8-a538-201f37a8ae98","Type":"ContainerStarted","Data":"b089f25ebf4a184f63a3a75d3649d27b68328232ef1fce90fa09755ea9cb3772"} Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.061841 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" event={"ID":"e8c730f2-32d0-4cf6-b422-c89448a58aec","Type":"ContainerStarted","Data":"f7bda10529ce9acfdd6785f18c95c2a6c5c387803179618175b7f170dc72ba02"} Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.062640 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh" 
event={"ID":"4388adc3-ddda-464a-89d9-a5ec287898f6","Type":"ContainerStarted","Data":"294f77f57ca681cd5094e8f3d4dfd8330daa9c1bc4277eefd30a96cf544477eb"} Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.436751 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/12167c1d-a63f-4593-96c4-31d4f2bbd004-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.440906 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/12167c1d-a63f-4593-96c4-31d4f2bbd004-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"12167c1d-a63f-4593-96c4-31d4f2bbd004\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.670719 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.734891 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd"] Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.736990 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.740846 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.741091 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.741104 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.741321 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.741360 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd"] Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.741397 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-w5ffr" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.741474 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-49g0f6n4k6oac" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.741561 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.842040 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/edfc3cbf-076e-4229-a7af-451ed8f2673c-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " 
pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.842325 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/edfc3cbf-076e-4229-a7af-451ed8f2673c-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.842353 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7jgk\" (UniqueName: \"kubernetes.io/projected/edfc3cbf-076e-4229-a7af-451ed8f2673c-kube-api-access-k7jgk\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.842386 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/edfc3cbf-076e-4229-a7af-451ed8f2673c-secret-thanos-querier-tls\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.842423 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/edfc3cbf-076e-4229-a7af-451ed8f2673c-secret-grpc-tls\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.842456 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/edfc3cbf-076e-4229-a7af-451ed8f2673c-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.842476 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/edfc3cbf-076e-4229-a7af-451ed8f2673c-metrics-client-ca\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.842499 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/edfc3cbf-076e-4229-a7af-451ed8f2673c-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.943791 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/edfc3cbf-076e-4229-a7af-451ed8f2673c-secret-grpc-tls\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.943904 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/edfc3cbf-076e-4229-a7af-451ed8f2673c-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: 
\"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.943936 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/edfc3cbf-076e-4229-a7af-451ed8f2673c-metrics-client-ca\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.943971 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/edfc3cbf-076e-4229-a7af-451ed8f2673c-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.944010 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/edfc3cbf-076e-4229-a7af-451ed8f2673c-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.944035 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/edfc3cbf-076e-4229-a7af-451ed8f2673c-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.944065 4794 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-k7jgk\" (UniqueName: \"kubernetes.io/projected/edfc3cbf-076e-4229-a7af-451ed8f2673c-kube-api-access-k7jgk\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.944110 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/edfc3cbf-076e-4229-a7af-451ed8f2673c-secret-thanos-querier-tls\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.945611 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/edfc3cbf-076e-4229-a7af-451ed8f2673c-metrics-client-ca\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.949741 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/edfc3cbf-076e-4229-a7af-451ed8f2673c-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.950166 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/edfc3cbf-076e-4229-a7af-451ed8f2673c-secret-grpc-tls\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 
crc kubenswrapper[4794]: I0216 17:06:25.950830 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/edfc3cbf-076e-4229-a7af-451ed8f2673c-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.954246 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/edfc3cbf-076e-4229-a7af-451ed8f2673c-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.954771 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/edfc3cbf-076e-4229-a7af-451ed8f2673c-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.965098 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/edfc3cbf-076e-4229-a7af-451ed8f2673c-secret-thanos-querier-tls\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:25 crc kubenswrapper[4794]: I0216 17:06:25.972287 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7jgk\" (UniqueName: 
\"kubernetes.io/projected/edfc3cbf-076e-4229-a7af-451ed8f2673c-kube-api-access-k7jgk\") pod \"thanos-querier-97c7cdc9f-d8vrd\" (UID: \"edfc3cbf-076e-4229-a7af-451ed8f2673c\") " pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:26 crc kubenswrapper[4794]: I0216 17:06:26.062276 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:26 crc kubenswrapper[4794]: I0216 17:06:26.069452 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-fh2sl" event={"ID":"b1d22f14-f473-4dc8-a538-201f37a8ae98","Type":"ContainerStarted","Data":"d2ef7b0e65f41970960efa98f85ee7d50e2eabf83fccef0ff679bbe5d6747104"} Feb 16 17:06:26 crc kubenswrapper[4794]: I0216 17:06:26.073708 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh" event={"ID":"4388adc3-ddda-464a-89d9-a5ec287898f6","Type":"ContainerStarted","Data":"67d2c544f4f4730fa122e353144f8568b1c45060867d0e09bfafea23fe3d8415"} Feb 16 17:06:26 crc kubenswrapper[4794]: I0216 17:06:26.073760 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh" event={"ID":"4388adc3-ddda-464a-89d9-a5ec287898f6","Type":"ContainerStarted","Data":"0670fdcb1786d0ded01a2277022fcb072ab7055396d8d4387a0653f8e3dd9ab9"} Feb 16 17:06:26 crc kubenswrapper[4794]: I0216 17:06:26.154207 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:27.015267 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd"] Feb 16 17:06:28 crc kubenswrapper[4794]: W0216 17:06:27.024725 4794 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedfc3cbf_076e_4229_a7af_451ed8f2673c.slice/crio-04dd71260cc00ee54a438a257cc9963baa52e2ec49f60cd832fd63944848ffea WatchSource:0}: Error finding container 04dd71260cc00ee54a438a257cc9963baa52e2ec49f60cd832fd63944848ffea: Status 404 returned error can't find the container with id 04dd71260cc00ee54a438a257cc9963baa52e2ec49f60cd832fd63944848ffea Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:27.081335 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" event={"ID":"edfc3cbf-076e-4229-a7af-451ed8f2673c","Type":"ContainerStarted","Data":"04dd71260cc00ee54a438a257cc9963baa52e2ec49f60cd832fd63944848ffea"} Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:27.086692 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"12167c1d-a63f-4593-96c4-31d4f2bbd004","Type":"ContainerStarted","Data":"ce433bebc818bae5aabca3cefce83881ef3a8f11bb5955b6a155a5d9132c397b"} Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:27.088195 4794 generic.go:334] "Generic (PLEG): container finished" podID="b1d22f14-f473-4dc8-a538-201f37a8ae98" containerID="d2ef7b0e65f41970960efa98f85ee7d50e2eabf83fccef0ff679bbe5d6747104" exitCode=0 Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:27.088253 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-fh2sl" event={"ID":"b1d22f14-f473-4dc8-a538-201f37a8ae98","Type":"ContainerDied","Data":"d2ef7b0e65f41970960efa98f85ee7d50e2eabf83fccef0ff679bbe5d6747104"} Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:27.089841 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" event={"ID":"e8c730f2-32d0-4cf6-b422-c89448a58aec","Type":"ContainerStarted","Data":"c3924e9efd6df77dc00226f7176a3466edc31b93ff5e14032dc81ab18f76d992"} Feb 16 17:06:28 crc kubenswrapper[4794]: 
I0216 17:06:27.091777 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh" event={"ID":"4388adc3-ddda-464a-89d9-a5ec287898f6","Type":"ContainerStarted","Data":"a127a17d6b74493376e5242e99b14395ec4be596c04b3611995546d95984fa1c"} Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:27.127819 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-ddhwh" podStartSLOduration=2.6151185420000003 podStartE2EDuration="4.127797678s" podCreationTimestamp="2026-02-16 17:06:23 +0000 UTC" firstStartedPulling="2026-02-16 17:06:25.304459445 +0000 UTC m=+411.252554092" lastFinishedPulling="2026-02-16 17:06:26.817138561 +0000 UTC m=+412.765233228" observedRunningTime="2026-02-16 17:06:27.126462712 +0000 UTC m=+413.074557379" watchObservedRunningTime="2026-02-16 17:06:27.127797678 +0000 UTC m=+413.075892325" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.444965 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7677c68b7-g7nj2"] Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.446100 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.516890 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7677c68b7-g7nj2"] Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.597791 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1143b1ad-4f22-43ed-9a85-e9abfe207481-console-oauth-config\") pod \"console-7677c68b7-g7nj2\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") " pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.598179 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-service-ca\") pod \"console-7677c68b7-g7nj2\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") " pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.598218 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-oauth-serving-cert\") pod \"console-7677c68b7-g7nj2\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") " pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.598253 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4c6n\" (UniqueName: \"kubernetes.io/projected/1143b1ad-4f22-43ed-9a85-e9abfe207481-kube-api-access-l4c6n\") pod \"console-7677c68b7-g7nj2\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") " pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.598283 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-console-config\") pod \"console-7677c68b7-g7nj2\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") " pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.598325 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-trusted-ca-bundle\") pod \"console-7677c68b7-g7nj2\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") " pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.598364 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1143b1ad-4f22-43ed-9a85-e9abfe207481-console-serving-cert\") pod \"console-7677c68b7-g7nj2\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") " pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.699858 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1143b1ad-4f22-43ed-9a85-e9abfe207481-console-oauth-config\") pod \"console-7677c68b7-g7nj2\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") " pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.699931 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-service-ca\") pod \"console-7677c68b7-g7nj2\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") " pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.700025 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-oauth-serving-cert\") pod \"console-7677c68b7-g7nj2\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") " pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.700073 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4c6n\" (UniqueName: \"kubernetes.io/projected/1143b1ad-4f22-43ed-9a85-e9abfe207481-kube-api-access-l4c6n\") pod \"console-7677c68b7-g7nj2\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") " pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.700104 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-console-config\") pod \"console-7677c68b7-g7nj2\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") " pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.700125 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-trusted-ca-bundle\") pod \"console-7677c68b7-g7nj2\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") " pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.700162 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1143b1ad-4f22-43ed-9a85-e9abfe207481-console-serving-cert\") pod \"console-7677c68b7-g7nj2\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") " pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.702108 4794 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-console-config\") pod \"console-7677c68b7-g7nj2\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") " pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.702662 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-trusted-ca-bundle\") pod \"console-7677c68b7-g7nj2\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") " pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.703287 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-oauth-serving-cert\") pod \"console-7677c68b7-g7nj2\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") " pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.701126 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-service-ca\") pod \"console-7677c68b7-g7nj2\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") " pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.706203 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1143b1ad-4f22-43ed-9a85-e9abfe207481-console-oauth-config\") pod \"console-7677c68b7-g7nj2\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") " pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.706292 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1143b1ad-4f22-43ed-9a85-e9abfe207481-console-serving-cert\") pod \"console-7677c68b7-g7nj2\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") " pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.716005 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4c6n\" (UniqueName: \"kubernetes.io/projected/1143b1ad-4f22-43ed-9a85-e9abfe207481-kube-api-access-l4c6n\") pod \"console-7677c68b7-g7nj2\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") " pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:28 crc kubenswrapper[4794]: I0216 17:06:28.764086 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.046032 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-565c65954d-lb4z8"] Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.047645 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.051626 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.051725 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-90ql84o152u53" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.051741 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.051994 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.052152 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-xl85d" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.052286 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.056101 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-565c65954d-lb4z8"] Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.111944 4794 generic.go:334] "Generic (PLEG): container finished" podID="12167c1d-a63f-4593-96c4-31d4f2bbd004" containerID="ec89b7d8ec1038263addaa0355692bbabc79567bc1f84f30100e8ff3c9c24f86" exitCode=0 Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.112011 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"12167c1d-a63f-4593-96c4-31d4f2bbd004","Type":"ContainerDied","Data":"ec89b7d8ec1038263addaa0355692bbabc79567bc1f84f30100e8ff3c9c24f86"} Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.126283 4794 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-fh2sl" event={"ID":"b1d22f14-f473-4dc8-a538-201f37a8ae98","Type":"ContainerStarted","Data":"a2a56cc6ffb08bdcd111db25a10fc805a233fc0a2f31f37dc1fca445d7148b17"} Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.126348 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-fh2sl" event={"ID":"b1d22f14-f473-4dc8-a538-201f37a8ae98","Type":"ContainerStarted","Data":"10e0068e410513c5453746d5ac09e3aed085cab06e7143d1b920c3627f9219e7"} Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.129201 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" event={"ID":"e8c730f2-32d0-4cf6-b422-c89448a58aec","Type":"ContainerStarted","Data":"5d3dcb248b97478197131b089d2f4377ecb93688d415ab12dc60fdca970cbdd8"} Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.129246 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" event={"ID":"e8c730f2-32d0-4cf6-b422-c89448a58aec","Type":"ContainerStarted","Data":"34ef6aee8a2e58766e5b6a02ed1368d5208869ddd11e22fd4252b07625e3f0da"} Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.180070 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-fh2sl" podStartSLOduration=5.114391101 podStartE2EDuration="6.18005342s" podCreationTimestamp="2026-02-16 17:06:23 +0000 UTC" firstStartedPulling="2026-02-16 17:06:24.722388394 +0000 UTC m=+410.670483041" lastFinishedPulling="2026-02-16 17:06:25.788050703 +0000 UTC m=+411.736145360" observedRunningTime="2026-02-16 17:06:29.176185316 +0000 UTC m=+415.124279963" watchObservedRunningTime="2026-02-16 17:06:29.18005342 +0000 UTC m=+415.128148067" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.201794 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/console-7677c68b7-g7nj2"] Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.203372 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9btkh" podStartSLOduration=4.287552453 podStartE2EDuration="6.203352303s" podCreationTimestamp="2026-02-16 17:06:23 +0000 UTC" firstStartedPulling="2026-02-16 17:06:24.899933393 +0000 UTC m=+410.848028040" lastFinishedPulling="2026-02-16 17:06:26.815733223 +0000 UTC m=+412.763827890" observedRunningTime="2026-02-16 17:06:29.19717323 +0000 UTC m=+415.145267877" watchObservedRunningTime="2026-02-16 17:06:29.203352303 +0000 UTC m=+415.151446960" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.208143 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/86b5fc57-5479-4483-bbc1-95ea454a5294-secret-metrics-server-tls\") pod \"metrics-server-565c65954d-lb4z8\" (UID: \"86b5fc57-5479-4483-bbc1-95ea454a5294\") " pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.208197 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/86b5fc57-5479-4483-bbc1-95ea454a5294-metrics-server-audit-profiles\") pod \"metrics-server-565c65954d-lb4z8\" (UID: \"86b5fc57-5479-4483-bbc1-95ea454a5294\") " pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.208293 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/86b5fc57-5479-4483-bbc1-95ea454a5294-secret-metrics-client-certs\") pod \"metrics-server-565c65954d-lb4z8\" (UID: \"86b5fc57-5479-4483-bbc1-95ea454a5294\") " 
pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.208365 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/86b5fc57-5479-4483-bbc1-95ea454a5294-audit-log\") pod \"metrics-server-565c65954d-lb4z8\" (UID: \"86b5fc57-5479-4483-bbc1-95ea454a5294\") " pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.208506 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86b5fc57-5479-4483-bbc1-95ea454a5294-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-565c65954d-lb4z8\" (UID: \"86b5fc57-5479-4483-bbc1-95ea454a5294\") " pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.208581 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b5fc57-5479-4483-bbc1-95ea454a5294-client-ca-bundle\") pod \"metrics-server-565c65954d-lb4z8\" (UID: \"86b5fc57-5479-4483-bbc1-95ea454a5294\") " pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.208672 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ffwx\" (UniqueName: \"kubernetes.io/projected/86b5fc57-5479-4483-bbc1-95ea454a5294-kube-api-access-5ffwx\") pod \"metrics-server-565c65954d-lb4z8\" (UID: \"86b5fc57-5479-4483-bbc1-95ea454a5294\") " pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.310231 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: 
\"kubernetes.io/secret/86b5fc57-5479-4483-bbc1-95ea454a5294-secret-metrics-server-tls\") pod \"metrics-server-565c65954d-lb4z8\" (UID: \"86b5fc57-5479-4483-bbc1-95ea454a5294\") " pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.310270 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/86b5fc57-5479-4483-bbc1-95ea454a5294-metrics-server-audit-profiles\") pod \"metrics-server-565c65954d-lb4z8\" (UID: \"86b5fc57-5479-4483-bbc1-95ea454a5294\") " pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.310342 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/86b5fc57-5479-4483-bbc1-95ea454a5294-secret-metrics-client-certs\") pod \"metrics-server-565c65954d-lb4z8\" (UID: \"86b5fc57-5479-4483-bbc1-95ea454a5294\") " pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.310372 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/86b5fc57-5479-4483-bbc1-95ea454a5294-audit-log\") pod \"metrics-server-565c65954d-lb4z8\" (UID: \"86b5fc57-5479-4483-bbc1-95ea454a5294\") " pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.310445 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86b5fc57-5479-4483-bbc1-95ea454a5294-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-565c65954d-lb4z8\" (UID: \"86b5fc57-5479-4483-bbc1-95ea454a5294\") " pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 
17:06:29.310472 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b5fc57-5479-4483-bbc1-95ea454a5294-client-ca-bundle\") pod \"metrics-server-565c65954d-lb4z8\" (UID: \"86b5fc57-5479-4483-bbc1-95ea454a5294\") " pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.311009 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ffwx\" (UniqueName: \"kubernetes.io/projected/86b5fc57-5479-4483-bbc1-95ea454a5294-kube-api-access-5ffwx\") pod \"metrics-server-565c65954d-lb4z8\" (UID: \"86b5fc57-5479-4483-bbc1-95ea454a5294\") " pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.311365 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/86b5fc57-5479-4483-bbc1-95ea454a5294-audit-log\") pod \"metrics-server-565c65954d-lb4z8\" (UID: \"86b5fc57-5479-4483-bbc1-95ea454a5294\") " pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.311884 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/86b5fc57-5479-4483-bbc1-95ea454a5294-metrics-server-audit-profiles\") pod \"metrics-server-565c65954d-lb4z8\" (UID: \"86b5fc57-5479-4483-bbc1-95ea454a5294\") " pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.312176 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86b5fc57-5479-4483-bbc1-95ea454a5294-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-565c65954d-lb4z8\" (UID: \"86b5fc57-5479-4483-bbc1-95ea454a5294\") " 
pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.318339 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/86b5fc57-5479-4483-bbc1-95ea454a5294-secret-metrics-client-certs\") pod \"metrics-server-565c65954d-lb4z8\" (UID: \"86b5fc57-5479-4483-bbc1-95ea454a5294\") " pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.320933 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/86b5fc57-5479-4483-bbc1-95ea454a5294-secret-metrics-server-tls\") pod \"metrics-server-565c65954d-lb4z8\" (UID: \"86b5fc57-5479-4483-bbc1-95ea454a5294\") " pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.322318 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b5fc57-5479-4483-bbc1-95ea454a5294-client-ca-bundle\") pod \"metrics-server-565c65954d-lb4z8\" (UID: \"86b5fc57-5479-4483-bbc1-95ea454a5294\") " pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.334160 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ffwx\" (UniqueName: \"kubernetes.io/projected/86b5fc57-5479-4483-bbc1-95ea454a5294-kube-api-access-5ffwx\") pod \"metrics-server-565c65954d-lb4z8\" (UID: \"86b5fc57-5479-4483-bbc1-95ea454a5294\") " pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.379485 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-565c65954d-lb4z8"
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.429846 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-7f988585c4-jhdfj"]
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.430999 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7f988585c4-jhdfj"
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.432965 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp"
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.433217 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.433594 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7f988585c4-jhdfj"]
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.513039 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/4d3884b2-7fc8-4ef6-b0bb-39572f0d4d8b-monitoring-plugin-cert\") pod \"monitoring-plugin-7f988585c4-jhdfj\" (UID: \"4d3884b2-7fc8-4ef6-b0bb-39572f0d4d8b\") " pod="openshift-monitoring/monitoring-plugin-7f988585c4-jhdfj"
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.615043 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/4d3884b2-7fc8-4ef6-b0bb-39572f0d4d8b-monitoring-plugin-cert\") pod \"monitoring-plugin-7f988585c4-jhdfj\" (UID: \"4d3884b2-7fc8-4ef6-b0bb-39572f0d4d8b\") " pod="openshift-monitoring/monitoring-plugin-7f988585c4-jhdfj"
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.620467 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/4d3884b2-7fc8-4ef6-b0bb-39572f0d4d8b-monitoring-plugin-cert\") pod \"monitoring-plugin-7f988585c4-jhdfj\" (UID: \"4d3884b2-7fc8-4ef6-b0bb-39572f0d4d8b\") " pod="openshift-monitoring/monitoring-plugin-7f988585c4-jhdfj"
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.755760 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7f988585c4-jhdfj"
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.794850 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-565c65954d-lb4z8"]
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.990206 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.992149 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.994835 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-a9gcoslfjq428"
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.995060 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.995180 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-w5q5w"
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.995342 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.995936 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.996283 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.996809 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.996988 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.997602 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.997820 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.997859 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Feb 16 17:06:29 crc kubenswrapper[4794]: I0216 17:06:29.999802 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.012691 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.013234 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.122076 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-web-config\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.122129 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.122227 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.122294 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.122355 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.122399 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.122493 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.122582 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.122655 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lcq4\" (UniqueName: \"kubernetes.io/projected/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-kube-api-access-4lcq4\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.122688 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.122749 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.122783 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-config\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.122813 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.122839 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.122899 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.122933 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.123001 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-config-out\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.123030 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.138492 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7677c68b7-g7nj2" event={"ID":"1143b1ad-4f22-43ed-9a85-e9abfe207481","Type":"ContainerStarted","Data":"acefbae5c0ff15c4c9daaa2739706730bff32ffb304b074e42da63c56b64d471"}
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.138586 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7677c68b7-g7nj2" event={"ID":"1143b1ad-4f22-43ed-9a85-e9abfe207481","Type":"ContainerStarted","Data":"06b2ec181718ed5908d05015442683db44e1e2e5d1bbe10218a4dfedacad2748"}
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.161681 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7677c68b7-g7nj2" podStartSLOduration=2.161664212 podStartE2EDuration="2.161664212s" podCreationTimestamp="2026-02-16 17:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:06:30.152071591 +0000 UTC m=+416.100166258" watchObservedRunningTime="2026-02-16 17:06:30.161664212 +0000 UTC m=+416.109758859"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.224845 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.224930 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.224974 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lcq4\" (UniqueName: \"kubernetes.io/projected/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-kube-api-access-4lcq4\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.225018 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.225049 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.225065 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-config\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.225107 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.225127 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.225159 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.225202 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.225237 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-config-out\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.225257 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.225290 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-web-config\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.225334 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.225350 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.225367 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.225392 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.225409 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.226525 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.229738 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.232946 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.232986 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.233231 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.233803 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.234153 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.234387 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-config-out\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.234417 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.235024 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-web-config\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.235038 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.235613 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.236816 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.238291 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.239081 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-config\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.239527 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.248844 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.251010 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lcq4\" (UniqueName: \"kubernetes.io/projected/6e407be3-b395-46f8-a2d8-5ae9b5bf398f-kube-api-access-4lcq4\") pod \"prometheus-k8s-0\" (UID: \"6e407be3-b395-46f8-a2d8-5ae9b5bf398f\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: I0216 17:06:30.313564 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:06:30 crc kubenswrapper[4794]: W0216 17:06:30.572542 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86b5fc57_5479_4483_bbc1_95ea454a5294.slice/crio-86c9f9a6d7d60bfd3cb19e87444d2a5f562876b51a9545a1dfb3423d02e82e95 WatchSource:0}: Error finding container 86c9f9a6d7d60bfd3cb19e87444d2a5f562876b51a9545a1dfb3423d02e82e95: Status 404 returned error can't find the container with id 86c9f9a6d7d60bfd3cb19e87444d2a5f562876b51a9545a1dfb3423d02e82e95
Feb 16 17:06:31 crc kubenswrapper[4794]: I0216 17:06:31.145353 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" event={"ID":"86b5fc57-5479-4483-bbc1-95ea454a5294","Type":"ContainerStarted","Data":"86c9f9a6d7d60bfd3cb19e87444d2a5f562876b51a9545a1dfb3423d02e82e95"}
Feb 16 17:06:31 crc kubenswrapper[4794]: I0216 17:06:31.357570 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7f988585c4-jhdfj"]
Feb 16 17:06:31 crc kubenswrapper[4794]: W0216 17:06:31.367799 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d3884b2_7fc8_4ef6_b0bb_39572f0d4d8b.slice/crio-1605903155f2428919e1d506d2747a8463e8f5073bbbd0364be11696063dc40a WatchSource:0}: Error finding container 1605903155f2428919e1d506d2747a8463e8f5073bbbd0364be11696063dc40a: Status 404 returned error can't find the container with id 1605903155f2428919e1d506d2747a8463e8f5073bbbd0364be11696063dc40a
Feb 16 17:06:31 crc kubenswrapper[4794]: I0216 17:06:31.416567 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 16 17:06:31 crc kubenswrapper[4794]: W0216 17:06:31.422090 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e407be3_b395_46f8_a2d8_5ae9b5bf398f.slice/crio-b89f3754cfe4cf420ccfaa32013fa8ef54e12a2c63d091506923bbdc9927d87c WatchSource:0}: Error finding container b89f3754cfe4cf420ccfaa32013fa8ef54e12a2c63d091506923bbdc9927d87c: Status 404 returned error can't find the container with id b89f3754cfe4cf420ccfaa32013fa8ef54e12a2c63d091506923bbdc9927d87c
Feb 16 17:06:32 crc kubenswrapper[4794]: I0216 17:06:32.154206 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7f988585c4-jhdfj" event={"ID":"4d3884b2-7fc8-4ef6-b0bb-39572f0d4d8b","Type":"ContainerStarted","Data":"1605903155f2428919e1d506d2747a8463e8f5073bbbd0364be11696063dc40a"}
Feb 16 17:06:32 crc kubenswrapper[4794]: I0216 17:06:32.157401 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" event={"ID":"edfc3cbf-076e-4229-a7af-451ed8f2673c","Type":"ContainerStarted","Data":"ed91d2b3437807fb2066708c940fa0d503e38512acba8d905b53fc505acdafb4"}
Feb 16 17:06:32 crc kubenswrapper[4794]: I0216 17:06:32.157455 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" event={"ID":"edfc3cbf-076e-4229-a7af-451ed8f2673c","Type":"ContainerStarted","Data":"367736f39b7ede206e6d69be4d411520af5f17ba3a1fe69f41938539ff881b3f"}
Feb 16 17:06:32 crc kubenswrapper[4794]: I0216 17:06:32.157471 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" event={"ID":"edfc3cbf-076e-4229-a7af-451ed8f2673c","Type":"ContainerStarted","Data":"7888f8320dc02b2eb37b99ebc9a58d198b2cccc89a6e14d2df437613ccc97da1"}
Feb 16 17:06:32 crc kubenswrapper[4794]: I0216 17:06:32.160584 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"12167c1d-a63f-4593-96c4-31d4f2bbd004","Type":"ContainerStarted","Data":"994868674fda4185832bd503f3c4db57e03f063aabf492a1d7cad51229d38686"}
Feb 16 17:06:32 crc kubenswrapper[4794]: I0216 17:06:32.160619 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"12167c1d-a63f-4593-96c4-31d4f2bbd004","Type":"ContainerStarted","Data":"5b34eff6e02bef888815bf6b4d503c0f248b859e907e6abd90285d5696b08be8"}
Feb 16 17:06:32 crc kubenswrapper[4794]: I0216 17:06:32.160634 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"12167c1d-a63f-4593-96c4-31d4f2bbd004","Type":"ContainerStarted","Data":"872305d75639c1f999a9f4c782aac6fa9c59f3f362069aeff74566a062851e36"}
Feb 16 17:06:32 crc kubenswrapper[4794]: I0216 17:06:32.160648 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"12167c1d-a63f-4593-96c4-31d4f2bbd004","Type":"ContainerStarted","Data":"f45102cea4b21af59e384e129bf462e75630551726ee646c174408ad5e760171"}
Feb 16 17:06:32 crc kubenswrapper[4794]: I0216 17:06:32.160660 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"12167c1d-a63f-4593-96c4-31d4f2bbd004","Type":"ContainerStarted","Data":"8a90b59ab45968c414a727c77f82df3214aa25c4e34bacffe6ded2a177494e90"}
Feb 16 17:06:32 crc kubenswrapper[4794]: I0216 17:06:32.162432 4794 generic.go:334] "Generic (PLEG): container finished" podID="6e407be3-b395-46f8-a2d8-5ae9b5bf398f" containerID="6fe4559c92d26d67a2f3ba0b25234c534d5d187ec6db1cf0e8b773c5e91df47c" exitCode=0
Feb 16 17:06:32 crc kubenswrapper[4794]: I0216 17:06:32.162455 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6e407be3-b395-46f8-a2d8-5ae9b5bf398f","Type":"ContainerDied","Data":"6fe4559c92d26d67a2f3ba0b25234c534d5d187ec6db1cf0e8b773c5e91df47c"}
Feb 16 17:06:32 crc kubenswrapper[4794]: I0216 17:06:32.162477 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6e407be3-b395-46f8-a2d8-5ae9b5bf398f","Type":"ContainerStarted","Data":"b89f3754cfe4cf420ccfaa32013fa8ef54e12a2c63d091506923bbdc9927d87c"}
Feb 16 17:06:33 crc kubenswrapper[4794]: I0216 17:06:33.175910 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" event={"ID":"edfc3cbf-076e-4229-a7af-451ed8f2673c","Type":"ContainerStarted","Data":"1a63981d74323d2387d49a070a6edf3c717fa50b77f8fe2d918d67a8f5d44ba4"}
Feb 16 17:06:33 crc kubenswrapper[4794]: I0216 17:06:33.180821 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"12167c1d-a63f-4593-96c4-31d4f2bbd004","Type":"ContainerStarted","Data":"f52f4d807dc678c7ec2d6390ed1b06302033622921213b060fe5c6b0deac7745"}
Feb 16 17:06:33 crc kubenswrapper[4794]: I0216 17:06:33.182360 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" event={"ID":"86b5fc57-5479-4483-bbc1-95ea454a5294","Type":"ContainerStarted","Data":"d6b3efb5bc0899b968a931faad42570feb26a406b3d48f4589a3a089b5d64d40"}
Feb 16 17:06:33 crc kubenswrapper[4794]: I0216 17:06:33.184823 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7f988585c4-jhdfj" event={"ID":"4d3884b2-7fc8-4ef6-b0bb-39572f0d4d8b","Type":"ContainerStarted","Data":"d6c722e6ecac3fc1536eed6906d5e1eeaff67e342029e55514f02366e4b2083f"}
Feb 16 17:06:33 crc kubenswrapper[4794]: I0216 17:06:33.185233 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-7f988585c4-jhdfj"
Feb 16 17:06:33 crc kubenswrapper[4794]: I0216 17:06:33.190537 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-7f988585c4-jhdfj"
Feb 16 17:06:33 crc kubenswrapper[4794]: I0216 17:06:33.220231 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.690961052 podStartE2EDuration="9.220213246s" podCreationTimestamp="2026-02-16 17:06:24 +0000 UTC" firstStartedPulling="2026-02-16 17:06:26.390871489 +0000 UTC m=+412.338966136" lastFinishedPulling="2026-02-16 17:06:32.920123663 +0000 UTC m=+418.868218330" observedRunningTime="2026-02-16 17:06:33.210972817 +0000 UTC m=+419.159067474" watchObservedRunningTime="2026-02-16 17:06:33.220213246 +0000 UTC m=+419.168307883"
Feb 16 17:06:33 crc kubenswrapper[4794]: I0216 17:06:33.240331 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-7f988585c4-jhdfj" podStartSLOduration=2.613133166 podStartE2EDuration="4.240296918s" podCreationTimestamp="2026-02-16 17:06:29 +0000 UTC" firstStartedPulling="2026-02-16 17:06:31.37039309 +0000 UTC m=+417.318487737" lastFinishedPulling="2026-02-16 17:06:32.997556842 +0000 UTC m=+418.945651489" observedRunningTime="2026-02-16 17:06:33.229173195 +0000 UTC m=+419.177267842" watchObservedRunningTime="2026-02-16 17:06:33.240296918 +0000 UTC m=+419.188391565"
Feb 16 17:06:33 crc kubenswrapper[4794]: I0216 17:06:33.261195 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" podStartSLOduration=2.663243784 podStartE2EDuration="4.261170367s" podCreationTimestamp="2026-02-16 17:06:29 +0000 UTC" firstStartedPulling="2026-02-16 17:06:30.582636881 +0000 UTC m=+416.530731528" lastFinishedPulling="2026-02-16 17:06:32.180563464 +0000 UTC m=+418.128658111" observedRunningTime="2026-02-16 17:06:33.254909842 +0000 UTC m=+419.203004509" watchObservedRunningTime="2026-02-16 17:06:33.261170367 +0000 UTC m=+419.209265014"
Feb 16 17:06:34 crc kubenswrapper[4794]: I0216 17:06:34.197433 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" event={"ID":"edfc3cbf-076e-4229-a7af-451ed8f2673c","Type":"ContainerStarted","Data":"bbe55e35be845a72bac53fe1162d67298a16e5721d879867aeade984a557b00e"}
Feb 16 17:06:34 crc kubenswrapper[4794]: I0216 17:06:34.197817 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" event={"ID":"edfc3cbf-076e-4229-a7af-451ed8f2673c","Type":"ContainerStarted","Data":"376fce2f903d68edcf5f86e555c14027aec7c776843299e5132c8ca3f843bbea"}
Feb 16 17:06:34 crc kubenswrapper[4794]: I0216 17:06:34.198194 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd"
Feb 16 17:06:34 crc kubenswrapper[4794]: I0216 17:06:34.223654 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" podStartSLOduration=3.33801345 podStartE2EDuration="9.22363755s" podCreationTimestamp="2026-02-16 17:06:25 +0000 UTC" firstStartedPulling="2026-02-16
17:06:27.034480832 +0000 UTC m=+412.982575479" lastFinishedPulling="2026-02-16 17:06:32.920104932 +0000 UTC m=+418.868199579" observedRunningTime="2026-02-16 17:06:34.220762091 +0000 UTC m=+420.168856738" watchObservedRunningTime="2026-02-16 17:06:34.22363755 +0000 UTC m=+420.171732197" Feb 16 17:06:36 crc kubenswrapper[4794]: I0216 17:06:36.076292 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-97c7cdc9f-d8vrd" Feb 16 17:06:36 crc kubenswrapper[4794]: I0216 17:06:36.219829 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6e407be3-b395-46f8-a2d8-5ae9b5bf398f","Type":"ContainerStarted","Data":"30869d951455e0d9c362aa29be1f22101257365131477220cc32abe1c689f498"} Feb 16 17:06:36 crc kubenswrapper[4794]: I0216 17:06:36.219887 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6e407be3-b395-46f8-a2d8-5ae9b5bf398f","Type":"ContainerStarted","Data":"982d90b19c85be982d0372cbfe450776212b37f94222afe78860ff9bd2ad7b2a"} Feb 16 17:06:36 crc kubenswrapper[4794]: I0216 17:06:36.219902 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6e407be3-b395-46f8-a2d8-5ae9b5bf398f","Type":"ContainerStarted","Data":"133099ba9936afd21bb53e3b4c1cb09fdbc6525c089f65afd72a954525d6f8d2"} Feb 16 17:06:36 crc kubenswrapper[4794]: I0216 17:06:36.219913 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6e407be3-b395-46f8-a2d8-5ae9b5bf398f","Type":"ContainerStarted","Data":"c7c428f6d339752970fc654b1c6477e4e1850edad603548c84530423ac50afac"} Feb 16 17:06:36 crc kubenswrapper[4794]: I0216 17:06:36.219924 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"6e407be3-b395-46f8-a2d8-5ae9b5bf398f","Type":"ContainerStarted","Data":"c8d865a8b0757a159f900738a56f9b36633b2e50eb6b6290cd2bc37845556604"} Feb 16 17:06:37 crc kubenswrapper[4794]: I0216 17:06:37.229856 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6e407be3-b395-46f8-a2d8-5ae9b5bf398f","Type":"ContainerStarted","Data":"fd09b41897220638ea557f6c1d214ffbc2b3c9af88240a5e0d34d299d5930b75"} Feb 16 17:06:37 crc kubenswrapper[4794]: I0216 17:06:37.261612 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.968819937 podStartE2EDuration="8.261593294s" podCreationTimestamp="2026-02-16 17:06:29 +0000 UTC" firstStartedPulling="2026-02-16 17:06:32.173752329 +0000 UTC m=+418.121846976" lastFinishedPulling="2026-02-16 17:06:35.466525686 +0000 UTC m=+421.414620333" observedRunningTime="2026-02-16 17:06:37.260164235 +0000 UTC m=+423.208258882" watchObservedRunningTime="2026-02-16 17:06:37.261593294 +0000 UTC m=+423.209687941" Feb 16 17:06:38 crc kubenswrapper[4794]: I0216 17:06:38.764809 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:38 crc kubenswrapper[4794]: I0216 17:06:38.765192 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:38 crc kubenswrapper[4794]: I0216 17:06:38.769146 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:39 crc kubenswrapper[4794]: I0216 17:06:39.244446 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7677c68b7-g7nj2" Feb 16 17:06:39 crc kubenswrapper[4794]: I0216 17:06:39.300766 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zwsbc"] Feb 16 
17:06:40 crc kubenswrapper[4794]: I0216 17:06:40.314413 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:06:49 crc kubenswrapper[4794]: I0216 17:06:49.380536 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:06:49 crc kubenswrapper[4794]: I0216 17:06:49.381522 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.350507 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-zwsbc" podUID="f3fa8c07-9947-4f5c-8295-bdec401113b0" containerName="console" containerID="cri-o://920b53d1bf849546d05cda8efc0685b0a211289a07c9e4dcd4802b3251e52136" gracePeriod=15 Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.699324 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zwsbc_f3fa8c07-9947-4f5c-8295-bdec401113b0/console/0.log" Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.699608 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-zwsbc" Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.777412 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-service-ca\") pod \"f3fa8c07-9947-4f5c-8295-bdec401113b0\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.777489 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-trusted-ca-bundle\") pod \"f3fa8c07-9947-4f5c-8295-bdec401113b0\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.777515 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3fa8c07-9947-4f5c-8295-bdec401113b0-console-serving-cert\") pod \"f3fa8c07-9947-4f5c-8295-bdec401113b0\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.777560 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-console-config\") pod \"f3fa8c07-9947-4f5c-8295-bdec401113b0\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.777597 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btpks\" (UniqueName: \"kubernetes.io/projected/f3fa8c07-9947-4f5c-8295-bdec401113b0-kube-api-access-btpks\") pod \"f3fa8c07-9947-4f5c-8295-bdec401113b0\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.777637 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f3fa8c07-9947-4f5c-8295-bdec401113b0-console-oauth-config\") pod \"f3fa8c07-9947-4f5c-8295-bdec401113b0\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.777680 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-oauth-serving-cert\") pod \"f3fa8c07-9947-4f5c-8295-bdec401113b0\" (UID: \"f3fa8c07-9947-4f5c-8295-bdec401113b0\") " Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.778848 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f3fa8c07-9947-4f5c-8295-bdec401113b0" (UID: "f3fa8c07-9947-4f5c-8295-bdec401113b0"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.778860 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-service-ca" (OuterVolumeSpecName: "service-ca") pod "f3fa8c07-9947-4f5c-8295-bdec401113b0" (UID: "f3fa8c07-9947-4f5c-8295-bdec401113b0"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.778892 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-console-config" (OuterVolumeSpecName: "console-config") pod "f3fa8c07-9947-4f5c-8295-bdec401113b0" (UID: "f3fa8c07-9947-4f5c-8295-bdec401113b0"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.778936 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f3fa8c07-9947-4f5c-8295-bdec401113b0" (UID: "f3fa8c07-9947-4f5c-8295-bdec401113b0"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.783580 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3fa8c07-9947-4f5c-8295-bdec401113b0-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f3fa8c07-9947-4f5c-8295-bdec401113b0" (UID: "f3fa8c07-9947-4f5c-8295-bdec401113b0"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.790005 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3fa8c07-9947-4f5c-8295-bdec401113b0-kube-api-access-btpks" (OuterVolumeSpecName: "kube-api-access-btpks") pod "f3fa8c07-9947-4f5c-8295-bdec401113b0" (UID: "f3fa8c07-9947-4f5c-8295-bdec401113b0"). InnerVolumeSpecName "kube-api-access-btpks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.790156 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3fa8c07-9947-4f5c-8295-bdec401113b0-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f3fa8c07-9947-4f5c-8295-bdec401113b0" (UID: "f3fa8c07-9947-4f5c-8295-bdec401113b0"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.879072 4794 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.879423 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btpks\" (UniqueName: \"kubernetes.io/projected/f3fa8c07-9947-4f5c-8295-bdec401113b0-kube-api-access-btpks\") on node \"crc\" DevicePath \"\"" Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.879495 4794 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f3fa8c07-9947-4f5c-8295-bdec401113b0-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.879550 4794 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.879600 4794 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.879656 4794 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3fa8c07-9947-4f5c-8295-bdec401113b0-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:07:04 crc kubenswrapper[4794]: I0216 17:07:04.879706 4794 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3fa8c07-9947-4f5c-8295-bdec401113b0-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:07:05 crc 
kubenswrapper[4794]: I0216 17:07:05.412711 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zwsbc_f3fa8c07-9947-4f5c-8295-bdec401113b0/console/0.log" Feb 16 17:07:05 crc kubenswrapper[4794]: I0216 17:07:05.413098 4794 generic.go:334] "Generic (PLEG): container finished" podID="f3fa8c07-9947-4f5c-8295-bdec401113b0" containerID="920b53d1bf849546d05cda8efc0685b0a211289a07c9e4dcd4802b3251e52136" exitCode=2 Feb 16 17:07:05 crc kubenswrapper[4794]: I0216 17:07:05.413133 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zwsbc" event={"ID":"f3fa8c07-9947-4f5c-8295-bdec401113b0","Type":"ContainerDied","Data":"920b53d1bf849546d05cda8efc0685b0a211289a07c9e4dcd4802b3251e52136"} Feb 16 17:07:05 crc kubenswrapper[4794]: I0216 17:07:05.413166 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zwsbc" event={"ID":"f3fa8c07-9947-4f5c-8295-bdec401113b0","Type":"ContainerDied","Data":"8394e321ba83aa666298dfb95a4ffa24ba1962f848e7aaa0ba51dea48acd3aa3"} Feb 16 17:07:05 crc kubenswrapper[4794]: I0216 17:07:05.413187 4794 scope.go:117] "RemoveContainer" containerID="920b53d1bf849546d05cda8efc0685b0a211289a07c9e4dcd4802b3251e52136" Feb 16 17:07:05 crc kubenswrapper[4794]: I0216 17:07:05.413218 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-zwsbc" Feb 16 17:07:05 crc kubenswrapper[4794]: I0216 17:07:05.443495 4794 scope.go:117] "RemoveContainer" containerID="920b53d1bf849546d05cda8efc0685b0a211289a07c9e4dcd4802b3251e52136" Feb 16 17:07:05 crc kubenswrapper[4794]: E0216 17:07:05.444070 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"920b53d1bf849546d05cda8efc0685b0a211289a07c9e4dcd4802b3251e52136\": container with ID starting with 920b53d1bf849546d05cda8efc0685b0a211289a07c9e4dcd4802b3251e52136 not found: ID does not exist" containerID="920b53d1bf849546d05cda8efc0685b0a211289a07c9e4dcd4802b3251e52136" Feb 16 17:07:05 crc kubenswrapper[4794]: I0216 17:07:05.444140 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"920b53d1bf849546d05cda8efc0685b0a211289a07c9e4dcd4802b3251e52136"} err="failed to get container status \"920b53d1bf849546d05cda8efc0685b0a211289a07c9e4dcd4802b3251e52136\": rpc error: code = NotFound desc = could not find container \"920b53d1bf849546d05cda8efc0685b0a211289a07c9e4dcd4802b3251e52136\": container with ID starting with 920b53d1bf849546d05cda8efc0685b0a211289a07c9e4dcd4802b3251e52136 not found: ID does not exist" Feb 16 17:07:05 crc kubenswrapper[4794]: I0216 17:07:05.447647 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zwsbc"] Feb 16 17:07:05 crc kubenswrapper[4794]: I0216 17:07:05.456524 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-zwsbc"] Feb 16 17:07:06 crc kubenswrapper[4794]: I0216 17:07:06.813220 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3fa8c07-9947-4f5c-8295-bdec401113b0" path="/var/lib/kubelet/pods/f3fa8c07-9947-4f5c-8295-bdec401113b0/volumes" Feb 16 17:07:09 crc kubenswrapper[4794]: I0216 17:07:09.387830 4794 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:07:09 crc kubenswrapper[4794]: I0216 17:07:09.392140 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-565c65954d-lb4z8" Feb 16 17:07:30 crc kubenswrapper[4794]: I0216 17:07:30.314383 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:07:30 crc kubenswrapper[4794]: I0216 17:07:30.359537 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:07:30 crc kubenswrapper[4794]: I0216 17:07:30.617314 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.389713 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-d58d8d689-ppcq9"] Feb 16 17:07:46 crc kubenswrapper[4794]: E0216 17:07:46.390513 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3fa8c07-9947-4f5c-8295-bdec401113b0" containerName="console" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.390530 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3fa8c07-9947-4f5c-8295-bdec401113b0" containerName="console" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.390687 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3fa8c07-9947-4f5c-8295-bdec401113b0" containerName="console" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.391176 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.415911 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-d58d8d689-ppcq9"] Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.502572 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9bcab709-93c7-484e-b7f3-1bcdb808dd45-console-oauth-config\") pod \"console-d58d8d689-ppcq9\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.502655 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bcab709-93c7-484e-b7f3-1bcdb808dd45-console-serving-cert\") pod \"console-d58d8d689-ppcq9\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.502698 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-service-ca\") pod \"console-d58d8d689-ppcq9\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.502722 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-console-config\") pod \"console-d58d8d689-ppcq9\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.502738 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-oauth-serving-cert\") pod \"console-d58d8d689-ppcq9\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.502759 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-trusted-ca-bundle\") pod \"console-d58d8d689-ppcq9\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.502918 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57dbd\" (UniqueName: \"kubernetes.io/projected/9bcab709-93c7-484e-b7f3-1bcdb808dd45-kube-api-access-57dbd\") pod \"console-d58d8d689-ppcq9\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.603896 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9bcab709-93c7-484e-b7f3-1bcdb808dd45-console-oauth-config\") pod \"console-d58d8d689-ppcq9\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.604016 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bcab709-93c7-484e-b7f3-1bcdb808dd45-console-serving-cert\") pod \"console-d58d8d689-ppcq9\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 
17:07:46.604098 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-service-ca\") pod \"console-d58d8d689-ppcq9\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.604128 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-console-config\") pod \"console-d58d8d689-ppcq9\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.604150 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-oauth-serving-cert\") pod \"console-d58d8d689-ppcq9\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.604186 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-trusted-ca-bundle\") pod \"console-d58d8d689-ppcq9\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.604257 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57dbd\" (UniqueName: \"kubernetes.io/projected/9bcab709-93c7-484e-b7f3-1bcdb808dd45-kube-api-access-57dbd\") pod \"console-d58d8d689-ppcq9\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.605209 4794 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-service-ca\") pod \"console-d58d8d689-ppcq9\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.605237 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-console-config\") pod \"console-d58d8d689-ppcq9\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.605333 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-oauth-serving-cert\") pod \"console-d58d8d689-ppcq9\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.605952 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-trusted-ca-bundle\") pod \"console-d58d8d689-ppcq9\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.610011 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bcab709-93c7-484e-b7f3-1bcdb808dd45-console-serving-cert\") pod \"console-d58d8d689-ppcq9\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.610864 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9bcab709-93c7-484e-b7f3-1bcdb808dd45-console-oauth-config\") pod \"console-d58d8d689-ppcq9\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " pod="openshift-console/console-d58d8d689-ppcq9"
Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.620065 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57dbd\" (UniqueName: \"kubernetes.io/projected/9bcab709-93c7-484e-b7f3-1bcdb808dd45-kube-api-access-57dbd\") pod \"console-d58d8d689-ppcq9\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " pod="openshift-console/console-d58d8d689-ppcq9"
Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.709382 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-d58d8d689-ppcq9"
Feb 16 17:07:46 crc kubenswrapper[4794]: I0216 17:07:46.912641 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-d58d8d689-ppcq9"]
Feb 16 17:07:47 crc kubenswrapper[4794]: I0216 17:07:47.722493 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-d58d8d689-ppcq9" event={"ID":"9bcab709-93c7-484e-b7f3-1bcdb808dd45","Type":"ContainerStarted","Data":"b5f7e272d5ea88fb09c13744eb51c1a753aa4959926dda265e2610f23892805d"}
Feb 16 17:07:47 crc kubenswrapper[4794]: I0216 17:07:47.722814 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-d58d8d689-ppcq9" event={"ID":"9bcab709-93c7-484e-b7f3-1bcdb808dd45","Type":"ContainerStarted","Data":"1e3b88aab234a8c0300fd0ad599fcad3be58fa95d6c909245bb22cba204b1b25"}
Feb 16 17:07:47 crc kubenswrapper[4794]: I0216 17:07:47.755417 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-d58d8d689-ppcq9" podStartSLOduration=1.75539065 podStartE2EDuration="1.75539065s" podCreationTimestamp="2026-02-16 17:07:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:07:47.748604112 +0000 UTC m=+493.696698769" watchObservedRunningTime="2026-02-16 17:07:47.75539065 +0000 UTC m=+493.703485347"
Feb 16 17:07:56 crc kubenswrapper[4794]: I0216 17:07:56.709820 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-d58d8d689-ppcq9"
Feb 16 17:07:56 crc kubenswrapper[4794]: I0216 17:07:56.713825 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-d58d8d689-ppcq9"
Feb 16 17:07:56 crc kubenswrapper[4794]: I0216 17:07:56.720559 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-d58d8d689-ppcq9"
Feb 16 17:07:56 crc kubenswrapper[4794]: I0216 17:07:56.809035 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-d58d8d689-ppcq9"
Feb 16 17:07:56 crc kubenswrapper[4794]: I0216 17:07:56.890138 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7677c68b7-g7nj2"]
Feb 16 17:08:20 crc kubenswrapper[4794]: I0216 17:08:20.140892 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 17:08:20 crc kubenswrapper[4794]: I0216 17:08:20.141723 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 17:08:21 crc kubenswrapper[4794]: I0216 17:08:21.940541 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7677c68b7-g7nj2" podUID="1143b1ad-4f22-43ed-9a85-e9abfe207481" containerName="console" containerID="cri-o://acefbae5c0ff15c4c9daaa2739706730bff32ffb304b074e42da63c56b64d471" gracePeriod=15
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.317008 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7677c68b7-g7nj2_1143b1ad-4f22-43ed-9a85-e9abfe207481/console/0.log"
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.317529 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7677c68b7-g7nj2"
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.354052 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4c6n\" (UniqueName: \"kubernetes.io/projected/1143b1ad-4f22-43ed-9a85-e9abfe207481-kube-api-access-l4c6n\") pod \"1143b1ad-4f22-43ed-9a85-e9abfe207481\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") "
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.354122 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-service-ca\") pod \"1143b1ad-4f22-43ed-9a85-e9abfe207481\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") "
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.354203 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-console-config\") pod \"1143b1ad-4f22-43ed-9a85-e9abfe207481\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") "
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.354281 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1143b1ad-4f22-43ed-9a85-e9abfe207481-console-serving-cert\") pod \"1143b1ad-4f22-43ed-9a85-e9abfe207481\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") "
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.354394 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-trusted-ca-bundle\") pod \"1143b1ad-4f22-43ed-9a85-e9abfe207481\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") "
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.354474 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-oauth-serving-cert\") pod \"1143b1ad-4f22-43ed-9a85-e9abfe207481\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") "
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.354539 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1143b1ad-4f22-43ed-9a85-e9abfe207481-console-oauth-config\") pod \"1143b1ad-4f22-43ed-9a85-e9abfe207481\" (UID: \"1143b1ad-4f22-43ed-9a85-e9abfe207481\") "
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.355237 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-service-ca" (OuterVolumeSpecName: "service-ca") pod "1143b1ad-4f22-43ed-9a85-e9abfe207481" (UID: "1143b1ad-4f22-43ed-9a85-e9abfe207481"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.355299 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-console-config" (OuterVolumeSpecName: "console-config") pod "1143b1ad-4f22-43ed-9a85-e9abfe207481" (UID: "1143b1ad-4f22-43ed-9a85-e9abfe207481"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.355818 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1143b1ad-4f22-43ed-9a85-e9abfe207481" (UID: "1143b1ad-4f22-43ed-9a85-e9abfe207481"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.356356 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "1143b1ad-4f22-43ed-9a85-e9abfe207481" (UID: "1143b1ad-4f22-43ed-9a85-e9abfe207481"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.359777 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1143b1ad-4f22-43ed-9a85-e9abfe207481-kube-api-access-l4c6n" (OuterVolumeSpecName: "kube-api-access-l4c6n") pod "1143b1ad-4f22-43ed-9a85-e9abfe207481" (UID: "1143b1ad-4f22-43ed-9a85-e9abfe207481"). InnerVolumeSpecName "kube-api-access-l4c6n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.361555 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1143b1ad-4f22-43ed-9a85-e9abfe207481-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "1143b1ad-4f22-43ed-9a85-e9abfe207481" (UID: "1143b1ad-4f22-43ed-9a85-e9abfe207481"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.362253 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1143b1ad-4f22-43ed-9a85-e9abfe207481-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "1143b1ad-4f22-43ed-9a85-e9abfe207481" (UID: "1143b1ad-4f22-43ed-9a85-e9abfe207481"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.456012 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4c6n\" (UniqueName: \"kubernetes.io/projected/1143b1ad-4f22-43ed-9a85-e9abfe207481-kube-api-access-l4c6n\") on node \"crc\" DevicePath \"\""
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.456339 4794 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-service-ca\") on node \"crc\" DevicePath \"\""
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.456423 4794 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-console-config\") on node \"crc\" DevicePath \"\""
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.456492 4794 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1143b1ad-4f22-43ed-9a85-e9abfe207481-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.456581 4794 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.456655 4794 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1143b1ad-4f22-43ed-9a85-e9abfe207481-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.456724 4794 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1143b1ad-4f22-43ed-9a85-e9abfe207481-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.977888 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7677c68b7-g7nj2_1143b1ad-4f22-43ed-9a85-e9abfe207481/console/0.log"
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.977944 4794 generic.go:334] "Generic (PLEG): container finished" podID="1143b1ad-4f22-43ed-9a85-e9abfe207481" containerID="acefbae5c0ff15c4c9daaa2739706730bff32ffb304b074e42da63c56b64d471" exitCode=2
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.977979 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7677c68b7-g7nj2" event={"ID":"1143b1ad-4f22-43ed-9a85-e9abfe207481","Type":"ContainerDied","Data":"acefbae5c0ff15c4c9daaa2739706730bff32ffb304b074e42da63c56b64d471"}
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.978006 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7677c68b7-g7nj2" event={"ID":"1143b1ad-4f22-43ed-9a85-e9abfe207481","Type":"ContainerDied","Data":"06b2ec181718ed5908d05015442683db44e1e2e5d1bbe10218a4dfedacad2748"}
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.978022 4794 scope.go:117] "RemoveContainer" containerID="acefbae5c0ff15c4c9daaa2739706730bff32ffb304b074e42da63c56b64d471"
Feb 16 17:08:22 crc kubenswrapper[4794]: I0216 17:08:22.978060 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7677c68b7-g7nj2"
Feb 16 17:08:23 crc kubenswrapper[4794]: I0216 17:08:23.003398 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7677c68b7-g7nj2"]
Feb 16 17:08:23 crc kubenswrapper[4794]: I0216 17:08:23.004452 4794 scope.go:117] "RemoveContainer" containerID="acefbae5c0ff15c4c9daaa2739706730bff32ffb304b074e42da63c56b64d471"
Feb 16 17:08:23 crc kubenswrapper[4794]: E0216 17:08:23.005044 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acefbae5c0ff15c4c9daaa2739706730bff32ffb304b074e42da63c56b64d471\": container with ID starting with acefbae5c0ff15c4c9daaa2739706730bff32ffb304b074e42da63c56b64d471 not found: ID does not exist" containerID="acefbae5c0ff15c4c9daaa2739706730bff32ffb304b074e42da63c56b64d471"
Feb 16 17:08:23 crc kubenswrapper[4794]: I0216 17:08:23.005111 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acefbae5c0ff15c4c9daaa2739706730bff32ffb304b074e42da63c56b64d471"} err="failed to get container status \"acefbae5c0ff15c4c9daaa2739706730bff32ffb304b074e42da63c56b64d471\": rpc error: code = NotFound desc = could not find container \"acefbae5c0ff15c4c9daaa2739706730bff32ffb304b074e42da63c56b64d471\": container with ID starting with acefbae5c0ff15c4c9daaa2739706730bff32ffb304b074e42da63c56b64d471 not found: ID does not exist"
Feb 16 17:08:23 crc kubenswrapper[4794]: I0216 17:08:23.007971 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7677c68b7-g7nj2"]
Feb 16 17:08:24 crc kubenswrapper[4794]: I0216 17:08:24.801379 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1143b1ad-4f22-43ed-9a85-e9abfe207481" path="/var/lib/kubelet/pods/1143b1ad-4f22-43ed-9a85-e9abfe207481/volumes"
Feb 16 17:08:35 crc kubenswrapper[4794]: I0216 17:08:35.512197 4794 scope.go:117] "RemoveContainer" containerID="5d4245a21571624bce4be15cf67d676b937f28baa4a1c196cfd8ae9ea44134d2"
Feb 16 17:08:35 crc kubenswrapper[4794]: I0216 17:08:35.531778 4794 scope.go:117] "RemoveContainer" containerID="7d87e64460cf050717ed51e0cf9c76e7d822398ef5991c59f28acde1e65235d3"
Feb 16 17:08:50 crc kubenswrapper[4794]: I0216 17:08:50.140820 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 17:08:50 crc kubenswrapper[4794]: I0216 17:08:50.141726 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 17:09:20 crc kubenswrapper[4794]: I0216 17:09:20.140376 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 17:09:20 crc kubenswrapper[4794]: I0216 17:09:20.141089 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 17:09:20 crc kubenswrapper[4794]: I0216 17:09:20.141156 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf"
Feb 16 17:09:20 crc kubenswrapper[4794]: I0216 17:09:20.142037 4794 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c272885df85830363f92a97efc1eb57e276b4a14b8042b5e60c9c53b0e8dd10b"} pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 17:09:20 crc kubenswrapper[4794]: I0216 17:09:20.142130 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" containerID="cri-o://c272885df85830363f92a97efc1eb57e276b4a14b8042b5e60c9c53b0e8dd10b" gracePeriod=600
Feb 16 17:09:20 crc kubenswrapper[4794]: I0216 17:09:20.437720 4794 generic.go:334] "Generic (PLEG): container finished" podID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerID="c272885df85830363f92a97efc1eb57e276b4a14b8042b5e60c9c53b0e8dd10b" exitCode=0
Feb 16 17:09:20 crc kubenswrapper[4794]: I0216 17:09:20.437778 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerDied","Data":"c272885df85830363f92a97efc1eb57e276b4a14b8042b5e60c9c53b0e8dd10b"}
Feb 16 17:09:20 crc kubenswrapper[4794]: I0216 17:09:20.437820 4794 scope.go:117] "RemoveContainer" containerID="b2e80e5061d3d639e2192db6249af8300dc44db1cba1d8938a19b86cfdd0833f"
Feb 16 17:09:21 crc kubenswrapper[4794]: I0216 17:09:21.447672 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"1242c1c0cd51c797081153357fc1a3afcbb8aac8f950b8ce178092b5638f56c5"}
Feb 16 17:09:35 crc kubenswrapper[4794]: I0216 17:09:35.585233 4794 scope.go:117] "RemoveContainer" containerID="9603c865843a7abe69d329048e1905ee5512b76d43a2ffcd6d53c1644b780c09"
Feb 16 17:10:53 crc kubenswrapper[4794]: I0216 17:10:53.391907 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5"]
Feb 16 17:10:53 crc kubenswrapper[4794]: E0216 17:10:53.392774 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1143b1ad-4f22-43ed-9a85-e9abfe207481" containerName="console"
Feb 16 17:10:53 crc kubenswrapper[4794]: I0216 17:10:53.392788 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="1143b1ad-4f22-43ed-9a85-e9abfe207481" containerName="console"
Feb 16 17:10:53 crc kubenswrapper[4794]: I0216 17:10:53.392898 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="1143b1ad-4f22-43ed-9a85-e9abfe207481" containerName="console"
Feb 16 17:10:53 crc kubenswrapper[4794]: I0216 17:10:53.393787 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5"
Feb 16 17:10:53 crc kubenswrapper[4794]: I0216 17:10:53.396010 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 16 17:10:53 crc kubenswrapper[4794]: I0216 17:10:53.399906 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5"]
Feb 16 17:10:53 crc kubenswrapper[4794]: I0216 17:10:53.576797 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sszzx\" (UniqueName: \"kubernetes.io/projected/476791fd-4f52-4366-87cd-1d1154726fa8-kube-api-access-sszzx\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5\" (UID: \"476791fd-4f52-4366-87cd-1d1154726fa8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5"
Feb 16 17:10:53 crc kubenswrapper[4794]: I0216 17:10:53.577157 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/476791fd-4f52-4366-87cd-1d1154726fa8-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5\" (UID: \"476791fd-4f52-4366-87cd-1d1154726fa8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5"
Feb 16 17:10:53 crc kubenswrapper[4794]: I0216 17:10:53.577293 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/476791fd-4f52-4366-87cd-1d1154726fa8-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5\" (UID: \"476791fd-4f52-4366-87cd-1d1154726fa8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5"
Feb 16 17:10:53 crc kubenswrapper[4794]: I0216 17:10:53.677755 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/476791fd-4f52-4366-87cd-1d1154726fa8-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5\" (UID: \"476791fd-4f52-4366-87cd-1d1154726fa8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5"
Feb 16 17:10:53 crc kubenswrapper[4794]: I0216 17:10:53.677840 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sszzx\" (UniqueName: \"kubernetes.io/projected/476791fd-4f52-4366-87cd-1d1154726fa8-kube-api-access-sszzx\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5\" (UID: \"476791fd-4f52-4366-87cd-1d1154726fa8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5"
Feb 16 17:10:53 crc kubenswrapper[4794]: I0216 17:10:53.677869 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/476791fd-4f52-4366-87cd-1d1154726fa8-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5\" (UID: \"476791fd-4f52-4366-87cd-1d1154726fa8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5"
Feb 16 17:10:53 crc kubenswrapper[4794]: I0216 17:10:53.678447 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/476791fd-4f52-4366-87cd-1d1154726fa8-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5\" (UID: \"476791fd-4f52-4366-87cd-1d1154726fa8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5"
Feb 16 17:10:53 crc kubenswrapper[4794]: I0216 17:10:53.678477 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/476791fd-4f52-4366-87cd-1d1154726fa8-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5\" (UID: \"476791fd-4f52-4366-87cd-1d1154726fa8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5"
Feb 16 17:10:53 crc kubenswrapper[4794]: I0216 17:10:53.703524 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sszzx\" (UniqueName: \"kubernetes.io/projected/476791fd-4f52-4366-87cd-1d1154726fa8-kube-api-access-sszzx\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5\" (UID: \"476791fd-4f52-4366-87cd-1d1154726fa8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5"
Feb 16 17:10:53 crc kubenswrapper[4794]: I0216 17:10:53.709980 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5"
Feb 16 17:10:54 crc kubenswrapper[4794]: I0216 17:10:54.164006 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5"]
Feb 16 17:10:55 crc kubenswrapper[4794]: I0216 17:10:55.110277 4794 generic.go:334] "Generic (PLEG): container finished" podID="476791fd-4f52-4366-87cd-1d1154726fa8" containerID="22c55ec160244398ee49b92407c73073d2388d91c9564b8e32d2b7df8fea2113" exitCode=0
Feb 16 17:10:55 crc kubenswrapper[4794]: I0216 17:10:55.110581 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5" event={"ID":"476791fd-4f52-4366-87cd-1d1154726fa8","Type":"ContainerDied","Data":"22c55ec160244398ee49b92407c73073d2388d91c9564b8e32d2b7df8fea2113"}
Feb 16 17:10:55 crc kubenswrapper[4794]: I0216 17:10:55.110687 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5" event={"ID":"476791fd-4f52-4366-87cd-1d1154726fa8","Type":"ContainerStarted","Data":"d56205f209bb951399ee05a14c1a2a334757376beaa4604254592c17b82a523d"}
Feb 16 17:10:55 crc kubenswrapper[4794]: I0216 17:10:55.114581 4794 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 17:10:58 crc kubenswrapper[4794]: I0216 17:10:58.131201 4794 generic.go:334] "Generic (PLEG): container finished" podID="476791fd-4f52-4366-87cd-1d1154726fa8" containerID="5af71c4684215973a70d24f0db44fc5e1c9d3c9ee261798847b7244825595647" exitCode=0
Feb 16 17:10:58 crc kubenswrapper[4794]: I0216 17:10:58.131284 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5" event={"ID":"476791fd-4f52-4366-87cd-1d1154726fa8","Type":"ContainerDied","Data":"5af71c4684215973a70d24f0db44fc5e1c9d3c9ee261798847b7244825595647"}
Feb 16 17:10:59 crc kubenswrapper[4794]: I0216 17:10:59.145703 4794 generic.go:334] "Generic (PLEG): container finished" podID="476791fd-4f52-4366-87cd-1d1154726fa8" containerID="2870c714fd70a3a00c3ceed84d83b324e6bb300b59910d7fbc769dc0ff85ef00" exitCode=0
Feb 16 17:10:59 crc kubenswrapper[4794]: I0216 17:10:59.145764 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5" event={"ID":"476791fd-4f52-4366-87cd-1d1154726fa8","Type":"ContainerDied","Data":"2870c714fd70a3a00c3ceed84d83b324e6bb300b59910d7fbc769dc0ff85ef00"}
Feb 16 17:11:00 crc kubenswrapper[4794]: I0216 17:11:00.439699 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5"
Feb 16 17:11:00 crc kubenswrapper[4794]: I0216 17:11:00.474388 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/476791fd-4f52-4366-87cd-1d1154726fa8-util\") pod \"476791fd-4f52-4366-87cd-1d1154726fa8\" (UID: \"476791fd-4f52-4366-87cd-1d1154726fa8\") "
Feb 16 17:11:00 crc kubenswrapper[4794]: I0216 17:11:00.474440 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sszzx\" (UniqueName: \"kubernetes.io/projected/476791fd-4f52-4366-87cd-1d1154726fa8-kube-api-access-sszzx\") pod \"476791fd-4f52-4366-87cd-1d1154726fa8\" (UID: \"476791fd-4f52-4366-87cd-1d1154726fa8\") "
Feb 16 17:11:00 crc kubenswrapper[4794]: I0216 17:11:00.474459 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/476791fd-4f52-4366-87cd-1d1154726fa8-bundle\") pod \"476791fd-4f52-4366-87cd-1d1154726fa8\" (UID: \"476791fd-4f52-4366-87cd-1d1154726fa8\") "
Feb 16 17:11:00 crc kubenswrapper[4794]: I0216 17:11:00.476930 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/476791fd-4f52-4366-87cd-1d1154726fa8-bundle" (OuterVolumeSpecName: "bundle") pod "476791fd-4f52-4366-87cd-1d1154726fa8" (UID: "476791fd-4f52-4366-87cd-1d1154726fa8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:11:00 crc kubenswrapper[4794]: I0216 17:11:00.486623 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/476791fd-4f52-4366-87cd-1d1154726fa8-kube-api-access-sszzx" (OuterVolumeSpecName: "kube-api-access-sszzx") pod "476791fd-4f52-4366-87cd-1d1154726fa8" (UID: "476791fd-4f52-4366-87cd-1d1154726fa8"). InnerVolumeSpecName "kube-api-access-sszzx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:11:00 crc kubenswrapper[4794]: I0216 17:11:00.490191 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/476791fd-4f52-4366-87cd-1d1154726fa8-util" (OuterVolumeSpecName: "util") pod "476791fd-4f52-4366-87cd-1d1154726fa8" (UID: "476791fd-4f52-4366-87cd-1d1154726fa8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:11:00 crc kubenswrapper[4794]: I0216 17:11:00.575861 4794 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/476791fd-4f52-4366-87cd-1d1154726fa8-util\") on node \"crc\" DevicePath \"\""
Feb 16 17:11:00 crc kubenswrapper[4794]: I0216 17:11:00.575911 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sszzx\" (UniqueName: \"kubernetes.io/projected/476791fd-4f52-4366-87cd-1d1154726fa8-kube-api-access-sszzx\") on node \"crc\" DevicePath \"\""
Feb 16 17:11:00 crc kubenswrapper[4794]: I0216 17:11:00.575922 4794 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/476791fd-4f52-4366-87cd-1d1154726fa8-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:11:01 crc kubenswrapper[4794]: I0216 17:11:01.163916 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5" event={"ID":"476791fd-4f52-4366-87cd-1d1154726fa8","Type":"ContainerDied","Data":"d56205f209bb951399ee05a14c1a2a334757376beaa4604254592c17b82a523d"}
Feb 16 17:11:01 crc kubenswrapper[4794]: I0216 17:11:01.163963 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5"
Feb 16 17:11:01 crc kubenswrapper[4794]: I0216 17:11:01.163994 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d56205f209bb951399ee05a14c1a2a334757376beaa4604254592c17b82a523d"
Feb 16 17:11:04 crc kubenswrapper[4794]: I0216 17:11:04.763930 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9krvl"]
Feb 16 17:11:04 crc kubenswrapper[4794]: I0216 17:11:04.765457 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovn-controller" containerID="cri-o://0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0" gracePeriod=30
Feb 16 17:11:04 crc kubenswrapper[4794]: I0216 17:11:04.765722 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="northd" containerID="cri-o://fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1" gracePeriod=30
Feb 16 17:11:04 crc kubenswrapper[4794]: I0216 17:11:04.765822 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="kube-rbac-proxy-node" containerID="cri-o://69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184" gracePeriod=30
Feb 16 17:11:04 crc kubenswrapper[4794]: I0216 17:11:04.765957 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9" gracePeriod=30
Feb 16 17:11:04 crc kubenswrapper[4794]: I0216 17:11:04.765847 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovn-acl-logging" containerID="cri-o://c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb" gracePeriod=30
Feb 16 17:11:04 crc kubenswrapper[4794]: I0216 17:11:04.766028 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="sbdb" containerID="cri-o://ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0" gracePeriod=30
Feb 16 17:11:04 crc kubenswrapper[4794]: I0216 17:11:04.766056 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="nbdb" containerID="cri-o://bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0" gracePeriod=30
Feb 16 17:11:04 crc kubenswrapper[4794]: I0216 17:11:04.824589 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovnkube-controller" containerID="cri-o://725d53506d1041fde56a67b4b413ded09a8b73fe4cced1bd1199e1b99c1ed3e1" gracePeriod=30
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.192267 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zwhdn_f6f074ad-d6ce-4c47-aa3c-196e4ad30e64/kube-multus/1.log"
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.193004 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zwhdn_f6f074ad-d6ce-4c47-aa3c-196e4ad30e64/kube-multus/0.log"
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.193124 4794 generic.go:334] "Generic (PLEG): container finished" podID="f6f074ad-d6ce-4c47-aa3c-196e4ad30e64" containerID="1a81814b182e8628b21c89d613668a46a0be932629aacc121699a0775ddc225d" exitCode=2
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.193210 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zwhdn" event={"ID":"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64","Type":"ContainerDied","Data":"1a81814b182e8628b21c89d613668a46a0be932629aacc121699a0775ddc225d"}
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.193275 4794 scope.go:117] "RemoveContainer" containerID="9edf6c17e1dbdcd4944300dbae136ac68127ab3f145a2e3f3a9e87edc49b1757"
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.193933 4794 scope.go:117] "RemoveContainer" containerID="1a81814b182e8628b21c89d613668a46a0be932629aacc121699a0775ddc225d"
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.195941 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9krvl_d985e4f1-78bb-43f9-b86c-cd47831d602c/ovnkube-controller/3.log"
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.199382 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9krvl_d985e4f1-78bb-43f9-b86c-cd47831d602c/ovn-acl-logging/0.log"
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.199869 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9krvl_d985e4f1-78bb-43f9-b86c-cd47831d602c/ovn-controller/0.log"
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.200243 4794 generic.go:334] "Generic (PLEG): container finished" podID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerID="725d53506d1041fde56a67b4b413ded09a8b73fe4cced1bd1199e1b99c1ed3e1" exitCode=0
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.200262 4794 generic.go:334] "Generic (PLEG): container finished" podID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerID="ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0" exitCode=0
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.200270 4794 generic.go:334] "Generic (PLEG): container finished" podID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerID="bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0" exitCode=0
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.200278 4794 generic.go:334] "Generic (PLEG): container finished" podID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerID="fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1" exitCode=0
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.200286 4794 generic.go:334] "Generic (PLEG): container finished" podID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerID="c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb" exitCode=143
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.200294 4794 generic.go:334] "Generic (PLEG): container finished" podID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerID="0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0" exitCode=143
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.200344 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerDied","Data":"725d53506d1041fde56a67b4b413ded09a8b73fe4cced1bd1199e1b99c1ed3e1"}
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.200398 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerDied","Data":"ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0"}
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.200410 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerDied","Data":"bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0"}
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.200421 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerDied","Data":"fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1"}
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.200433 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerDied","Data":"c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb"}
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.200453 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerDied","Data":"0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0"}
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.223709 4794 scope.go:117] "RemoveContainer" containerID="6a9b07055fd16bf9dde792f372b5a19f7faf37d643ae4986f169c85fdcfe27d9"
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.954958 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9krvl_d985e4f1-78bb-43f9-b86c-cd47831d602c/ovn-acl-logging/0.log"
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.955679 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9krvl_d985e4f1-78bb-43f9-b86c-cd47831d602c/ovn-controller/0.log"
Feb 16 17:11:05 crc kubenswrapper[4794]: I0216 17:11:05.956021 4794 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.009872 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pvz6h"] Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.010120 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovnkube-controller" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010140 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovnkube-controller" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.010150 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="sbdb" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010158 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="sbdb" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.010164 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovnkube-controller" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010172 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovnkube-controller" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.010183 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovnkube-controller" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010190 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovnkube-controller" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.010201 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476791fd-4f52-4366-87cd-1d1154726fa8" 
containerName="pull" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010208 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="476791fd-4f52-4366-87cd-1d1154726fa8" containerName="pull" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.010217 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovn-acl-logging" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010224 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovn-acl-logging" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.010235 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476791fd-4f52-4366-87cd-1d1154726fa8" containerName="util" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010243 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="476791fd-4f52-4366-87cd-1d1154726fa8" containerName="util" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.010256 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010264 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.010274 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovnkube-controller" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010281 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovnkube-controller" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.010292 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" 
containerName="kubecfg-setup" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010318 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="kubecfg-setup" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.010327 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="nbdb" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010334 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="nbdb" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.010348 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="northd" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010355 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="northd" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.010370 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="kube-rbac-proxy-node" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010379 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="kube-rbac-proxy-node" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.010388 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476791fd-4f52-4366-87cd-1d1154726fa8" containerName="extract" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010395 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="476791fd-4f52-4366-87cd-1d1154726fa8" containerName="extract" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.010405 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovn-controller" Feb 16 17:11:06 crc 
kubenswrapper[4794]: I0216 17:11:06.010412 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovn-controller" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010529 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="kube-rbac-proxy-node" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010543 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovnkube-controller" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010553 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="sbdb" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010561 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovnkube-controller" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010570 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovn-acl-logging" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010580 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010592 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="476791fd-4f52-4366-87cd-1d1154726fa8" containerName="extract" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010602 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovnkube-controller" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010611 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" 
containerName="ovn-controller" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010644 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="northd" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010657 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="nbdb" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.010777 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovnkube-controller" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010786 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovnkube-controller" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.010904 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovnkube-controller" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.011160 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerName="ovnkube-controller" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.013101 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.147564 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpr45\" (UniqueName: \"kubernetes.io/projected/d985e4f1-78bb-43f9-b86c-cd47831d602c-kube-api-access-dpr45\") pod \"d985e4f1-78bb-43f9-b86c-cd47831d602c\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.147973 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-node-log\") pod \"d985e4f1-78bb-43f9-b86c-cd47831d602c\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148021 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-var-lib-openvswitch\") pod \"d985e4f1-78bb-43f9-b86c-cd47831d602c\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148059 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-cni-bin\") pod \"d985e4f1-78bb-43f9-b86c-cd47831d602c\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148095 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-systemd-units\") pod \"d985e4f1-78bb-43f9-b86c-cd47831d602c\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148117 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-run-systemd\") pod \"d985e4f1-78bb-43f9-b86c-cd47831d602c\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148109 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-node-log" (OuterVolumeSpecName: "node-log") pod "d985e4f1-78bb-43f9-b86c-cd47831d602c" (UID: "d985e4f1-78bb-43f9-b86c-cd47831d602c"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148144 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-run-netns\") pod \"d985e4f1-78bb-43f9-b86c-cd47831d602c\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148167 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d985e4f1-78bb-43f9-b86c-cd47831d602c" (UID: "d985e4f1-78bb-43f9-b86c-cd47831d602c"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148172 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-slash\") pod \"d985e4f1-78bb-43f9-b86c-cd47831d602c\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148181 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d985e4f1-78bb-43f9-b86c-cd47831d602c" (UID: "d985e4f1-78bb-43f9-b86c-cd47831d602c"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148222 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-slash" (OuterVolumeSpecName: "host-slash") pod "d985e4f1-78bb-43f9-b86c-cd47831d602c" (UID: "d985e4f1-78bb-43f9-b86c-cd47831d602c"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148230 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d985e4f1-78bb-43f9-b86c-cd47831d602c-ovnkube-script-lib\") pod \"d985e4f1-78bb-43f9-b86c-cd47831d602c\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148181 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d985e4f1-78bb-43f9-b86c-cd47831d602c" (UID: "d985e4f1-78bb-43f9-b86c-cd47831d602c"). 
InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148267 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d985e4f1-78bb-43f9-b86c-cd47831d602c-env-overrides\") pod \"d985e4f1-78bb-43f9-b86c-cd47831d602c\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148293 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-run-ovn\") pod \"d985e4f1-78bb-43f9-b86c-cd47831d602c\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148363 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d985e4f1-78bb-43f9-b86c-cd47831d602c\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148392 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-cni-netd\") pod \"d985e4f1-78bb-43f9-b86c-cd47831d602c\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148416 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-run-ovn-kubernetes\") pod \"d985e4f1-78bb-43f9-b86c-cd47831d602c\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148441 4794 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d985e4f1-78bb-43f9-b86c-cd47831d602c-ovnkube-config\") pod \"d985e4f1-78bb-43f9-b86c-cd47831d602c\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148458 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-run-openvswitch\") pod \"d985e4f1-78bb-43f9-b86c-cd47831d602c\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148482 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-kubelet\") pod \"d985e4f1-78bb-43f9-b86c-cd47831d602c\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148509 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d985e4f1-78bb-43f9-b86c-cd47831d602c-ovn-node-metrics-cert\") pod \"d985e4f1-78bb-43f9-b86c-cd47831d602c\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148539 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-log-socket\") pod \"d985e4f1-78bb-43f9-b86c-cd47831d602c\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148561 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-etc-openvswitch\") pod 
\"d985e4f1-78bb-43f9-b86c-cd47831d602c\" (UID: \"d985e4f1-78bb-43f9-b86c-cd47831d602c\") " Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148251 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d985e4f1-78bb-43f9-b86c-cd47831d602c" (UID: "d985e4f1-78bb-43f9-b86c-cd47831d602c"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148581 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d985e4f1-78bb-43f9-b86c-cd47831d602c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d985e4f1-78bb-43f9-b86c-cd47831d602c" (UID: "d985e4f1-78bb-43f9-b86c-cd47831d602c"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148641 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d985e4f1-78bb-43f9-b86c-cd47831d602c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d985e4f1-78bb-43f9-b86c-cd47831d602c" (UID: "d985e4f1-78bb-43f9-b86c-cd47831d602c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148589 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d985e4f1-78bb-43f9-b86c-cd47831d602c" (UID: "d985e4f1-78bb-43f9-b86c-cd47831d602c"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148607 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d985e4f1-78bb-43f9-b86c-cd47831d602c" (UID: "d985e4f1-78bb-43f9-b86c-cd47831d602c"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148681 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d985e4f1-78bb-43f9-b86c-cd47831d602c" (UID: "d985e4f1-78bb-43f9-b86c-cd47831d602c"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148622 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d985e4f1-78bb-43f9-b86c-cd47831d602c" (UID: "d985e4f1-78bb-43f9-b86c-cd47831d602c"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148694 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e1822e10-f774-4e1c-b717-b052c24fef8c-ovnkube-config\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148717 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d985e4f1-78bb-43f9-b86c-cd47831d602c" (UID: "d985e4f1-78bb-43f9-b86c-cd47831d602c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148728 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-var-lib-openvswitch\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148759 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d985e4f1-78bb-43f9-b86c-cd47831d602c" (UID: "d985e4f1-78bb-43f9-b86c-cd47831d602c"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148759 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkgpw\" (UniqueName: \"kubernetes.io/projected/e1822e10-f774-4e1c-b717-b052c24fef8c-kube-api-access-pkgpw\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148798 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d985e4f1-78bb-43f9-b86c-cd47831d602c" (UID: "d985e4f1-78bb-43f9-b86c-cd47831d602c"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148801 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-node-log\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148823 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-log-socket" (OuterVolumeSpecName: "log-socket") pod "d985e4f1-78bb-43f9-b86c-cd47831d602c" (UID: "d985e4f1-78bb-43f9-b86c-cd47831d602c"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148854 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d985e4f1-78bb-43f9-b86c-cd47831d602c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d985e4f1-78bb-43f9-b86c-cd47831d602c" (UID: "d985e4f1-78bb-43f9-b86c-cd47831d602c"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148904 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-systemd-units\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148948 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e1822e10-f774-4e1c-b717-b052c24fef8c-env-overrides\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.148980 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-etc-openvswitch\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149007 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-host-cni-netd\") pod 
\"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149025 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-host-kubelet\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149040 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-host-slash\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149057 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e1822e10-f774-4e1c-b717-b052c24fef8c-ovn-node-metrics-cert\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149088 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149114 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-run-openvswitch\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149128 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e1822e10-f774-4e1c-b717-b052c24fef8c-ovnkube-script-lib\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149147 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-log-socket\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149161 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-host-run-netns\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149183 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-host-run-ovn-kubernetes\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149207 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-run-ovn\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149232 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-run-systemd\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149249 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-host-cni-bin\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149392 4794 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149403 4794 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-slash\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149412 4794 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d985e4f1-78bb-43f9-b86c-cd47831d602c-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149420 4794 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/d985e4f1-78bb-43f9-b86c-cd47831d602c-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149429 4794 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149439 4794 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149449 4794 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149459 4794 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149469 4794 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149477 4794 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149488 4794 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/d985e4f1-78bb-43f9-b86c-cd47831d602c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149499 4794 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-log-socket\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149510 4794 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149520 4794 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-node-log\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149529 4794 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149540 4794 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.149549 4794 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.165526 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d985e4f1-78bb-43f9-b86c-cd47831d602c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod 
"d985e4f1-78bb-43f9-b86c-cd47831d602c" (UID: "d985e4f1-78bb-43f9-b86c-cd47831d602c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.166120 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d985e4f1-78bb-43f9-b86c-cd47831d602c-kube-api-access-dpr45" (OuterVolumeSpecName: "kube-api-access-dpr45") pod "d985e4f1-78bb-43f9-b86c-cd47831d602c" (UID: "d985e4f1-78bb-43f9-b86c-cd47831d602c"). InnerVolumeSpecName "kube-api-access-dpr45". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.179897 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d985e4f1-78bb-43f9-b86c-cd47831d602c" (UID: "d985e4f1-78bb-43f9-b86c-cd47831d602c"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.207252 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zwhdn_f6f074ad-d6ce-4c47-aa3c-196e4ad30e64/kube-multus/1.log" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.207599 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zwhdn" event={"ID":"f6f074ad-d6ce-4c47-aa3c-196e4ad30e64","Type":"ContainerStarted","Data":"0bc48339a3101432b1d0b0342d40f10f52732db7dc3cb3aef06ae86d095e2c1c"} Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.211332 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9krvl_d985e4f1-78bb-43f9-b86c-cd47831d602c/ovn-acl-logging/0.log" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.211912 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9krvl_d985e4f1-78bb-43f9-b86c-cd47831d602c/ovn-controller/0.log" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.212371 4794 generic.go:334] "Generic (PLEG): container finished" podID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerID="ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9" exitCode=0 Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.212399 4794 generic.go:334] "Generic (PLEG): container finished" podID="d985e4f1-78bb-43f9-b86c-cd47831d602c" containerID="69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184" exitCode=0 Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.212424 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerDied","Data":"ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9"} Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.212449 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerDied","Data":"69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184"} Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.212462 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" event={"ID":"d985e4f1-78bb-43f9-b86c-cd47831d602c","Type":"ContainerDied","Data":"dfe3c1a24efa8b004629e7b97cbe7e033c0465c2275173213104298f4abc7c5b"} Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.212490 4794 scope.go:117] "RemoveContainer" containerID="725d53506d1041fde56a67b4b413ded09a8b73fe4cced1bd1199e1b99c1ed3e1" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.212662 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9krvl" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.228252 4794 scope.go:117] "RemoveContainer" containerID="ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.243840 4794 scope.go:117] "RemoveContainer" containerID="bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.252241 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-run-systemd\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.253149 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-host-cni-bin\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.253264 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e1822e10-f774-4e1c-b717-b052c24fef8c-ovnkube-config\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.253397 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-var-lib-openvswitch\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.253505 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkgpw\" (UniqueName: \"kubernetes.io/projected/e1822e10-f774-4e1c-b717-b052c24fef8c-kube-api-access-pkgpw\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.253610 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-node-log\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.253724 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-systemd-units\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc 
kubenswrapper[4794]: I0216 17:11:06.253808 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e1822e10-f774-4e1c-b717-b052c24fef8c-env-overrides\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.253900 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-etc-openvswitch\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.253998 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-host-cni-netd\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.254091 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-host-kubelet\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.254181 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-host-slash\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.254265 4794 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e1822e10-f774-4e1c-b717-b052c24fef8c-ovn-node-metrics-cert\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.254381 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.254489 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-run-openvswitch\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.254575 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e1822e10-f774-4e1c-b717-b052c24fef8c-ovnkube-script-lib\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.254665 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e1822e10-f774-4e1c-b717-b052c24fef8c-ovnkube-config\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.254677 4794 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-log-socket\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.254836 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-host-run-netns\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.254961 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-host-run-ovn-kubernetes\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.255062 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-run-ovn\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.255171 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-host-kubelet\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.255363 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-node-log\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.255529 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-systemd-units\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.255812 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e1822e10-f774-4e1c-b717-b052c24fef8c-env-overrides\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.255832 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-etc-openvswitch\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.253928 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-var-lib-openvswitch\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.255859 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-host-cni-netd\") pod \"ovnkube-node-pvz6h\" (UID: 
\"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.253112 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-run-systemd\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.253952 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-host-cni-bin\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.256016 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-host-slash\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.257579 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-log-socket\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.257596 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-run-openvswitch\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 
17:11:06.257622 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-host-run-netns\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.257641 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-host-run-ovn-kubernetes\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.257633 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.257662 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1822e10-f774-4e1c-b717-b052c24fef8c-run-ovn\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.258189 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e1822e10-f774-4e1c-b717-b052c24fef8c-ovnkube-script-lib\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.264767 4794 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e1822e10-f774-4e1c-b717-b052c24fef8c-ovn-node-metrics-cert\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.266446 4794 scope.go:117] "RemoveContainer" containerID="fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.273239 4794 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d985e4f1-78bb-43f9-b86c-cd47831d602c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.276284 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpr45\" (UniqueName: \"kubernetes.io/projected/d985e4f1-78bb-43f9-b86c-cd47831d602c-kube-api-access-dpr45\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.276384 4794 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d985e4f1-78bb-43f9-b86c-cd47831d602c-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.274644 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkgpw\" (UniqueName: \"kubernetes.io/projected/e1822e10-f774-4e1c-b717-b052c24fef8c-kube-api-access-pkgpw\") pod \"ovnkube-node-pvz6h\" (UID: \"e1822e10-f774-4e1c-b717-b052c24fef8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.277096 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9krvl"] Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.289504 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-node-9krvl"] Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.315775 4794 scope.go:117] "RemoveContainer" containerID="ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.327143 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.329032 4794 scope.go:117] "RemoveContainer" containerID="69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.357009 4794 scope.go:117] "RemoveContainer" containerID="c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb" Feb 16 17:11:06 crc kubenswrapper[4794]: W0216 17:11:06.381665 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1822e10_f774_4e1c_b717_b052c24fef8c.slice/crio-65c158a0a18be534241dd74f02f6684710b09f9c9a0bfbd0a621970f449f5002 WatchSource:0}: Error finding container 65c158a0a18be534241dd74f02f6684710b09f9c9a0bfbd0a621970f449f5002: Status 404 returned error can't find the container with id 65c158a0a18be534241dd74f02f6684710b09f9c9a0bfbd0a621970f449f5002 Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.394558 4794 scope.go:117] "RemoveContainer" containerID="0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.423207 4794 scope.go:117] "RemoveContainer" containerID="f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.447202 4794 scope.go:117] "RemoveContainer" containerID="725d53506d1041fde56a67b4b413ded09a8b73fe4cced1bd1199e1b99c1ed3e1" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.447629 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"725d53506d1041fde56a67b4b413ded09a8b73fe4cced1bd1199e1b99c1ed3e1\": container with ID starting with 725d53506d1041fde56a67b4b413ded09a8b73fe4cced1bd1199e1b99c1ed3e1 not found: ID does not exist" containerID="725d53506d1041fde56a67b4b413ded09a8b73fe4cced1bd1199e1b99c1ed3e1" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.447654 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"725d53506d1041fde56a67b4b413ded09a8b73fe4cced1bd1199e1b99c1ed3e1"} err="failed to get container status \"725d53506d1041fde56a67b4b413ded09a8b73fe4cced1bd1199e1b99c1ed3e1\": rpc error: code = NotFound desc = could not find container \"725d53506d1041fde56a67b4b413ded09a8b73fe4cced1bd1199e1b99c1ed3e1\": container with ID starting with 725d53506d1041fde56a67b4b413ded09a8b73fe4cced1bd1199e1b99c1ed3e1 not found: ID does not exist" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.447675 4794 scope.go:117] "RemoveContainer" containerID="ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.447994 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\": container with ID starting with ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0 not found: ID does not exist" containerID="ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.448023 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0"} err="failed to get container status \"ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\": rpc error: code = NotFound desc = could not find container 
\"ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\": container with ID starting with ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0 not found: ID does not exist" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.448037 4794 scope.go:117] "RemoveContainer" containerID="bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.448242 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\": container with ID starting with bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0 not found: ID does not exist" containerID="bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.448268 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0"} err="failed to get container status \"bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\": rpc error: code = NotFound desc = could not find container \"bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\": container with ID starting with bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0 not found: ID does not exist" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.448282 4794 scope.go:117] "RemoveContainer" containerID="fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.448492 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\": container with ID starting with fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1 not found: ID does not exist" 
containerID="fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.448509 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1"} err="failed to get container status \"fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\": rpc error: code = NotFound desc = could not find container \"fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\": container with ID starting with fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1 not found: ID does not exist" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.448520 4794 scope.go:117] "RemoveContainer" containerID="ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.448676 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\": container with ID starting with ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9 not found: ID does not exist" containerID="ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.448692 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9"} err="failed to get container status \"ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\": rpc error: code = NotFound desc = could not find container \"ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\": container with ID starting with ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9 not found: ID does not exist" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.448704 4794 scope.go:117] 
"RemoveContainer" containerID="69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.448850 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\": container with ID starting with 69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184 not found: ID does not exist" containerID="69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.448874 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184"} err="failed to get container status \"69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\": rpc error: code = NotFound desc = could not find container \"69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\": container with ID starting with 69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184 not found: ID does not exist" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.448886 4794 scope.go:117] "RemoveContainer" containerID="c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.449059 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\": container with ID starting with c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb not found: ID does not exist" containerID="c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.449079 4794 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb"} err="failed to get container status \"c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\": rpc error: code = NotFound desc = could not find container \"c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\": container with ID starting with c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb not found: ID does not exist" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.449093 4794 scope.go:117] "RemoveContainer" containerID="0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.449251 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\": container with ID starting with 0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0 not found: ID does not exist" containerID="0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.449271 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0"} err="failed to get container status \"0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\": rpc error: code = NotFound desc = could not find container \"0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\": container with ID starting with 0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0 not found: ID does not exist" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.449284 4794 scope.go:117] "RemoveContainer" containerID="f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592" Feb 16 17:11:06 crc kubenswrapper[4794]: E0216 17:11:06.449683 4794 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\": container with ID starting with f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592 not found: ID does not exist" containerID="f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.449702 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592"} err="failed to get container status \"f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\": rpc error: code = NotFound desc = could not find container \"f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\": container with ID starting with f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592 not found: ID does not exist" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.449714 4794 scope.go:117] "RemoveContainer" containerID="725d53506d1041fde56a67b4b413ded09a8b73fe4cced1bd1199e1b99c1ed3e1" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.450404 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"725d53506d1041fde56a67b4b413ded09a8b73fe4cced1bd1199e1b99c1ed3e1"} err="failed to get container status \"725d53506d1041fde56a67b4b413ded09a8b73fe4cced1bd1199e1b99c1ed3e1\": rpc error: code = NotFound desc = could not find container \"725d53506d1041fde56a67b4b413ded09a8b73fe4cced1bd1199e1b99c1ed3e1\": container with ID starting with 725d53506d1041fde56a67b4b413ded09a8b73fe4cced1bd1199e1b99c1ed3e1 not found: ID does not exist" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.450421 4794 scope.go:117] "RemoveContainer" containerID="ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.450686 4794 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0"} err="failed to get container status \"ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\": rpc error: code = NotFound desc = could not find container \"ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0\": container with ID starting with ac0d97691ff1260245292d208003cddbdf60a690ffb0ac41b6eb1294339f9af0 not found: ID does not exist" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.450706 4794 scope.go:117] "RemoveContainer" containerID="bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.451063 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0"} err="failed to get container status \"bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\": rpc error: code = NotFound desc = could not find container \"bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0\": container with ID starting with bacbec1cec1d652cbf60410e5ac2ccdbbe5bd1cac23e9659468465fa2d1a79a0 not found: ID does not exist" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.451103 4794 scope.go:117] "RemoveContainer" containerID="fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.451521 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1"} err="failed to get container status \"fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\": rpc error: code = NotFound desc = could not find container \"fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1\": container with ID starting with 
fb2a07eb9c58a25a63a73ae0098d2bdfebdb02fb775b4a2d2b7eae53cfc93cc1 not found: ID does not exist" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.451552 4794 scope.go:117] "RemoveContainer" containerID="ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.452018 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9"} err="failed to get container status \"ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\": rpc error: code = NotFound desc = could not find container \"ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9\": container with ID starting with ac39123127c72317d7617d6e82e55583c11b5d19cbfd9b194c2204335c47b9d9 not found: ID does not exist" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.452046 4794 scope.go:117] "RemoveContainer" containerID="69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.452413 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184"} err="failed to get container status \"69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\": rpc error: code = NotFound desc = could not find container \"69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184\": container with ID starting with 69e84ea1027f2654d474358f45391ab9a600670c82c70d2bd1672af86a7d3184 not found: ID does not exist" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.452437 4794 scope.go:117] "RemoveContainer" containerID="c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.452678 4794 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb"} err="failed to get container status \"c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\": rpc error: code = NotFound desc = could not find container \"c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb\": container with ID starting with c34c5e4544f5a1f499abdb46d62ebc65dc3761153b5609f31269049f769aa4eb not found: ID does not exist" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.452695 4794 scope.go:117] "RemoveContainer" containerID="0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.452943 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0"} err="failed to get container status \"0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\": rpc error: code = NotFound desc = could not find container \"0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0\": container with ID starting with 0f2f324fc11fc78d8d0b6d30189bef4d6867b0674e1fa02bf043e389a49906a0 not found: ID does not exist" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.452981 4794 scope.go:117] "RemoveContainer" containerID="f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.453383 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592"} err="failed to get container status \"f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\": rpc error: code = NotFound desc = could not find container \"f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592\": container with ID starting with f55ddfc2be6fa054d7713b1ac2e6dd960cf35f20c812a718f5bd10ad2ee7f592 not found: ID does not 
exist" Feb 16 17:11:06 crc kubenswrapper[4794]: I0216 17:11:06.800464 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d985e4f1-78bb-43f9-b86c-cd47831d602c" path="/var/lib/kubelet/pods/d985e4f1-78bb-43f9-b86c-cd47831d602c/volumes" Feb 16 17:11:07 crc kubenswrapper[4794]: I0216 17:11:07.219596 4794 generic.go:334] "Generic (PLEG): container finished" podID="e1822e10-f774-4e1c-b717-b052c24fef8c" containerID="213964540f9ba4a09ede75cf66cad1965f4583b1bf800c2d9128d2b38a6a4d90" exitCode=0 Feb 16 17:11:07 crc kubenswrapper[4794]: I0216 17:11:07.219644 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" event={"ID":"e1822e10-f774-4e1c-b717-b052c24fef8c","Type":"ContainerDied","Data":"213964540f9ba4a09ede75cf66cad1965f4583b1bf800c2d9128d2b38a6a4d90"} Feb 16 17:11:07 crc kubenswrapper[4794]: I0216 17:11:07.219670 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" event={"ID":"e1822e10-f774-4e1c-b717-b052c24fef8c","Type":"ContainerStarted","Data":"65c158a0a18be534241dd74f02f6684710b09f9c9a0bfbd0a621970f449f5002"} Feb 16 17:11:08 crc kubenswrapper[4794]: I0216 17:11:08.227956 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" event={"ID":"e1822e10-f774-4e1c-b717-b052c24fef8c","Type":"ContainerStarted","Data":"8a8e70fd49c68739f53991552e77a7c633830abb2dc6472e1521d3cad22c4368"} Feb 16 17:11:08 crc kubenswrapper[4794]: I0216 17:11:08.228293 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" event={"ID":"e1822e10-f774-4e1c-b717-b052c24fef8c","Type":"ContainerStarted","Data":"f5b5a3d5ddf102da4c1bf9099ad3b03920c186cd9d255586175bfc29328204a4"} Feb 16 17:11:08 crc kubenswrapper[4794]: I0216 17:11:08.228329 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" 
event={"ID":"e1822e10-f774-4e1c-b717-b052c24fef8c","Type":"ContainerStarted","Data":"bd82de71d72a782ec54cbfa01bd9d8a436d154c891cee64cb16168e1502fa6ae"} Feb 16 17:11:08 crc kubenswrapper[4794]: I0216 17:11:08.228339 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" event={"ID":"e1822e10-f774-4e1c-b717-b052c24fef8c","Type":"ContainerStarted","Data":"dda856b51f8c49fb8a680c13653c252918249210b4ee177caf0b7d6ebf0611a5"} Feb 16 17:11:08 crc kubenswrapper[4794]: I0216 17:11:08.228348 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" event={"ID":"e1822e10-f774-4e1c-b717-b052c24fef8c","Type":"ContainerStarted","Data":"47196c9884744362780242dd25e68028bc63f6506ef4dabc92680e941f1331ee"} Feb 16 17:11:08 crc kubenswrapper[4794]: I0216 17:11:08.228358 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" event={"ID":"e1822e10-f774-4e1c-b717-b052c24fef8c","Type":"ContainerStarted","Data":"f1ed9b83c081c68aa3231e6f2c114dbb7c97556c9c335ce1a36279b17e0c4ab6"} Feb 16 17:11:10 crc kubenswrapper[4794]: I0216 17:11:10.242185 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" event={"ID":"e1822e10-f774-4e1c-b717-b052c24fef8c","Type":"ContainerStarted","Data":"a5e06f15b8cf07f673c5050b49b25324890b42f8c8fb3d19d6c6fd57c2ce8f60"} Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.338580 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-v7bg9"] Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.339646 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7bg9" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.342223 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-rq52d" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.347497 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.349503 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.433568 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568"] Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.434231 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.437094 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-2qrz2" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.437269 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.440661 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf"] Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.442018 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.462429 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568\" (UID: \"3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.462495 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568\" (UID: \"3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.462528 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxcq5\" (UniqueName: \"kubernetes.io/projected/d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4-kube-api-access-hxcq5\") pod \"obo-prometheus-operator-68bc856cb9-v7bg9\" (UID: \"d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7bg9" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.560948 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-d85pd"] Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.562051 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-d85pd" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.563217 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxcq5\" (UniqueName: \"kubernetes.io/projected/d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4-kube-api-access-hxcq5\") pod \"obo-prometheus-operator-68bc856cb9-v7bg9\" (UID: \"d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7bg9" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.563276 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf\" (UID: \"0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.563366 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf\" (UID: \"0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.563404 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568\" (UID: \"3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.563439 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568\" (UID: \"3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.567156 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568\" (UID: \"3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.569698 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-fkl6l" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.570077 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.570386 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568\" (UID: \"3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.588353 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxcq5\" (UniqueName: \"kubernetes.io/projected/d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4-kube-api-access-hxcq5\") pod \"obo-prometheus-operator-68bc856cb9-v7bg9\" (UID: 
\"d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7bg9" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.660703 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7bg9" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.664159 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf\" (UID: \"0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.664265 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnghf\" (UniqueName: \"kubernetes.io/projected/bf8a1703-ef5d-4314-92ff-0a4f21d863ca-kube-api-access-xnghf\") pod \"observability-operator-59bdc8b94-d85pd\" (UID: \"bf8a1703-ef5d-4314-92ff-0a4f21d863ca\") " pod="openshift-operators/observability-operator-59bdc8b94-d85pd" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.664335 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf\" (UID: \"0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.664371 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf8a1703-ef5d-4314-92ff-0a4f21d863ca-observability-operator-tls\") pod 
\"observability-operator-59bdc8b94-d85pd\" (UID: \"bf8a1703-ef5d-4314-92ff-0a4f21d863ca\") " pod="openshift-operators/observability-operator-59bdc8b94-d85pd" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.667606 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf\" (UID: \"0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.667613 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf\" (UID: \"0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf" Feb 16 17:11:12 crc kubenswrapper[4794]: E0216 17:11:12.685539 4794 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-v7bg9_openshift-operators_d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4_0(aca240fd41a3f113e55385ca9adadae7d4daf53d6da08d30c1b5e0d2310173e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:11:12 crc kubenswrapper[4794]: E0216 17:11:12.685600 4794 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-v7bg9_openshift-operators_d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4_0(aca240fd41a3f113e55385ca9adadae7d4daf53d6da08d30c1b5e0d2310173e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7bg9" Feb 16 17:11:12 crc kubenswrapper[4794]: E0216 17:11:12.685621 4794 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-v7bg9_openshift-operators_d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4_0(aca240fd41a3f113e55385ca9adadae7d4daf53d6da08d30c1b5e0d2310173e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7bg9" Feb 16 17:11:12 crc kubenswrapper[4794]: E0216 17:11:12.685666 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-v7bg9_openshift-operators(d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-v7bg9_openshift-operators(d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-v7bg9_openshift-operators_d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4_0(aca240fd41a3f113e55385ca9adadae7d4daf53d6da08d30c1b5e0d2310173e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7bg9" podUID="d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.752737 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.755580 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tq9qc"] Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.756329 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.759146 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-bmhwl" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.764257 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.770730 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf8a1703-ef5d-4314-92ff-0a4f21d863ca-observability-operator-tls\") pod \"observability-operator-59bdc8b94-d85pd\" (UID: \"bf8a1703-ef5d-4314-92ff-0a4f21d863ca\") " pod="openshift-operators/observability-operator-59bdc8b94-d85pd" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.770965 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnghf\" (UniqueName: \"kubernetes.io/projected/bf8a1703-ef5d-4314-92ff-0a4f21d863ca-kube-api-access-xnghf\") pod \"observability-operator-59bdc8b94-d85pd\" (UID: \"bf8a1703-ef5d-4314-92ff-0a4f21d863ca\") " pod="openshift-operators/observability-operator-59bdc8b94-d85pd" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.778509 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf8a1703-ef5d-4314-92ff-0a4f21d863ca-observability-operator-tls\") pod \"observability-operator-59bdc8b94-d85pd\" (UID: \"bf8a1703-ef5d-4314-92ff-0a4f21d863ca\") " pod="openshift-operators/observability-operator-59bdc8b94-d85pd" Feb 16 17:11:12 crc kubenswrapper[4794]: E0216 17:11:12.793995 4794 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568_openshift-operators_3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a_0(491ca5e75066651c47054baf3a38eb8e2ca9ae42e879ea281147c3c2ccb41864): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:11:12 crc kubenswrapper[4794]: E0216 17:11:12.794130 4794 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568_openshift-operators_3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a_0(491ca5e75066651c47054baf3a38eb8e2ca9ae42e879ea281147c3c2ccb41864): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568" Feb 16 17:11:12 crc kubenswrapper[4794]: E0216 17:11:12.794205 4794 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568_openshift-operators_3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a_0(491ca5e75066651c47054baf3a38eb8e2ca9ae42e879ea281147c3c2ccb41864): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568" Feb 16 17:11:12 crc kubenswrapper[4794]: E0216 17:11:12.794321 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568_openshift-operators(3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568_openshift-operators(3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568_openshift-operators_3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a_0(491ca5e75066651c47054baf3a38eb8e2ca9ae42e879ea281147c3c2ccb41864): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568" podUID="3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.794550 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnghf\" (UniqueName: \"kubernetes.io/projected/bf8a1703-ef5d-4314-92ff-0a4f21d863ca-kube-api-access-xnghf\") pod \"observability-operator-59bdc8b94-d85pd\" (UID: \"bf8a1703-ef5d-4314-92ff-0a4f21d863ca\") " pod="openshift-operators/observability-operator-59bdc8b94-d85pd" Feb 16 17:11:12 crc kubenswrapper[4794]: E0216 17:11:12.812484 4794 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf_openshift-operators_0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b_0(2f6cdd7fab02ebdda0d654ccdaf5c71ec05adb8cd13ecf9e09de45cf39a5f797): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 17:11:12 crc kubenswrapper[4794]: E0216 17:11:12.812543 4794 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf_openshift-operators_0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b_0(2f6cdd7fab02ebdda0d654ccdaf5c71ec05adb8cd13ecf9e09de45cf39a5f797): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf" Feb 16 17:11:12 crc kubenswrapper[4794]: E0216 17:11:12.812566 4794 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf_openshift-operators_0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b_0(2f6cdd7fab02ebdda0d654ccdaf5c71ec05adb8cd13ecf9e09de45cf39a5f797): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf" Feb 16 17:11:12 crc kubenswrapper[4794]: E0216 17:11:12.812615 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf_openshift-operators(0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf_openshift-operators(0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf_openshift-operators_0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b_0(2f6cdd7fab02ebdda0d654ccdaf5c71ec05adb8cd13ecf9e09de45cf39a5f797): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf" podUID="0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.872193 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twb58\" (UniqueName: \"kubernetes.io/projected/33908c91-9542-47cd-9530-dfe7b104e79e-kube-api-access-twb58\") pod \"perses-operator-5bf474d74f-tq9qc\" (UID: \"33908c91-9542-47cd-9530-dfe7b104e79e\") " pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.872250 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/33908c91-9542-47cd-9530-dfe7b104e79e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tq9qc\" (UID: \"33908c91-9542-47cd-9530-dfe7b104e79e\") " pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.926614 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-d85pd" Feb 16 17:11:12 crc kubenswrapper[4794]: E0216 17:11:12.949845 4794 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-d85pd_openshift-operators_bf8a1703-ef5d-4314-92ff-0a4f21d863ca_0(0ea56bc46fa806b4fab6b4b11753d2f90f653beed37c8c559cd0a40306fd95ee): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 17:11:12 crc kubenswrapper[4794]: E0216 17:11:12.949921 4794 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-d85pd_openshift-operators_bf8a1703-ef5d-4314-92ff-0a4f21d863ca_0(0ea56bc46fa806b4fab6b4b11753d2f90f653beed37c8c559cd0a40306fd95ee): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-d85pd" Feb 16 17:11:12 crc kubenswrapper[4794]: E0216 17:11:12.949947 4794 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-d85pd_openshift-operators_bf8a1703-ef5d-4314-92ff-0a4f21d863ca_0(0ea56bc46fa806b4fab6b4b11753d2f90f653beed37c8c559cd0a40306fd95ee): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-d85pd" Feb 16 17:11:12 crc kubenswrapper[4794]: E0216 17:11:12.949998 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-d85pd_openshift-operators(bf8a1703-ef5d-4314-92ff-0a4f21d863ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-d85pd_openshift-operators(bf8a1703-ef5d-4314-92ff-0a4f21d863ca)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-d85pd_openshift-operators_bf8a1703-ef5d-4314-92ff-0a4f21d863ca_0(0ea56bc46fa806b4fab6b4b11753d2f90f653beed37c8c559cd0a40306fd95ee): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-d85pd" podUID="bf8a1703-ef5d-4314-92ff-0a4f21d863ca" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.973029 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twb58\" (UniqueName: \"kubernetes.io/projected/33908c91-9542-47cd-9530-dfe7b104e79e-kube-api-access-twb58\") pod \"perses-operator-5bf474d74f-tq9qc\" (UID: \"33908c91-9542-47cd-9530-dfe7b104e79e\") " pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.973097 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/33908c91-9542-47cd-9530-dfe7b104e79e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tq9qc\" (UID: \"33908c91-9542-47cd-9530-dfe7b104e79e\") " pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" Feb 16 17:11:12 crc kubenswrapper[4794]: I0216 17:11:12.974462 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/33908c91-9542-47cd-9530-dfe7b104e79e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tq9qc\" (UID: \"33908c91-9542-47cd-9530-dfe7b104e79e\") " pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.003143 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twb58\" (UniqueName: \"kubernetes.io/projected/33908c91-9542-47cd-9530-dfe7b104e79e-kube-api-access-twb58\") pod \"perses-operator-5bf474d74f-tq9qc\" (UID: \"33908c91-9542-47cd-9530-dfe7b104e79e\") " pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.079997 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" Feb 16 17:11:13 crc kubenswrapper[4794]: E0216 17:11:13.100685 4794 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tq9qc_openshift-operators_33908c91-9542-47cd-9530-dfe7b104e79e_0(c22bfbc069cedd7bf04a96fbf7646b06e250957310b1924e4fe0e73242dcdde4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:11:13 crc kubenswrapper[4794]: E0216 17:11:13.100749 4794 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tq9qc_openshift-operators_33908c91-9542-47cd-9530-dfe7b104e79e_0(c22bfbc069cedd7bf04a96fbf7646b06e250957310b1924e4fe0e73242dcdde4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" Feb 16 17:11:13 crc kubenswrapper[4794]: E0216 17:11:13.100770 4794 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tq9qc_openshift-operators_33908c91-9542-47cd-9530-dfe7b104e79e_0(c22bfbc069cedd7bf04a96fbf7646b06e250957310b1924e4fe0e73242dcdde4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" Feb 16 17:11:13 crc kubenswrapper[4794]: E0216 17:11:13.100814 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-tq9qc_openshift-operators(33908c91-9542-47cd-9530-dfe7b104e79e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-tq9qc_openshift-operators(33908c91-9542-47cd-9530-dfe7b104e79e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tq9qc_openshift-operators_33908c91-9542-47cd-9530-dfe7b104e79e_0(c22bfbc069cedd7bf04a96fbf7646b06e250957310b1924e4fe0e73242dcdde4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" podUID="33908c91-9542-47cd-9530-dfe7b104e79e" Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.263558 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" event={"ID":"e1822e10-f774-4e1c-b717-b052c24fef8c","Type":"ContainerStarted","Data":"2c10ff1f6ca0059e6d49b040d3cd25ca9fe923b6249a64db2555e0fe305b585c"} Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.263875 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.264052 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.300547 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" podStartSLOduration=8.300527485 podStartE2EDuration="8.300527485s" podCreationTimestamp="2026-02-16 17:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-16 17:11:13.296995766 +0000 UTC m=+699.245090413" watchObservedRunningTime="2026-02-16 17:11:13.300527485 +0000 UTC m=+699.248622132" Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.322250 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.842535 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568"] Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.842687 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568" Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.843164 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568" Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.857726 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf"] Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.857865 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf" Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.858367 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf" Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.862552 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-v7bg9"] Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.862812 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7bg9" Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.863380 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7bg9" Feb 16 17:11:13 crc kubenswrapper[4794]: E0216 17:11:13.881506 4794 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568_openshift-operators_3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a_0(b0e2290f65d1c2e7e0d9afb5ad7268a4117fe89c13a33185e79edd99b88df72f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:11:13 crc kubenswrapper[4794]: E0216 17:11:13.881569 4794 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568_openshift-operators_3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a_0(b0e2290f65d1c2e7e0d9afb5ad7268a4117fe89c13a33185e79edd99b88df72f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568" Feb 16 17:11:13 crc kubenswrapper[4794]: E0216 17:11:13.881594 4794 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568_openshift-operators_3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a_0(b0e2290f65d1c2e7e0d9afb5ad7268a4117fe89c13a33185e79edd99b88df72f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568" Feb 16 17:11:13 crc kubenswrapper[4794]: E0216 17:11:13.881632 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568_openshift-operators(3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568_openshift-operators(3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568_openshift-operators_3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a_0(b0e2290f65d1c2e7e0d9afb5ad7268a4117fe89c13a33185e79edd99b88df72f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568" podUID="3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a" Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.886170 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-d85pd"] Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.886283 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-d85pd" Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.886699 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-d85pd" Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.912460 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tq9qc"] Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.912588 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" Feb 16 17:11:13 crc kubenswrapper[4794]: I0216 17:11:13.913043 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" Feb 16 17:11:13 crc kubenswrapper[4794]: E0216 17:11:13.995489 4794 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf_openshift-operators_0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b_0(69448a31b7666f783b7054fda6eb81da954542503b027e08bfd0a2912f66228f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:11:13 crc kubenswrapper[4794]: E0216 17:11:13.995552 4794 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf_openshift-operators_0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b_0(69448a31b7666f783b7054fda6eb81da954542503b027e08bfd0a2912f66228f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf" Feb 16 17:11:13 crc kubenswrapper[4794]: E0216 17:11:13.995575 4794 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf_openshift-operators_0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b_0(69448a31b7666f783b7054fda6eb81da954542503b027e08bfd0a2912f66228f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf" Feb 16 17:11:13 crc kubenswrapper[4794]: E0216 17:11:13.995621 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf_openshift-operators(0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf_openshift-operators(0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf_openshift-operators_0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b_0(69448a31b7666f783b7054fda6eb81da954542503b027e08bfd0a2912f66228f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf" podUID="0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b" Feb 16 17:11:14 crc kubenswrapper[4794]: E0216 17:11:14.057743 4794 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-v7bg9_openshift-operators_d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4_0(6f5e2d241960c5e6d5beee8176ac4af1d027221d1913758214d0c5ba6e9f7b18): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:11:14 crc kubenswrapper[4794]: E0216 17:11:14.057800 4794 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-v7bg9_openshift-operators_d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4_0(6f5e2d241960c5e6d5beee8176ac4af1d027221d1913758214d0c5ba6e9f7b18): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7bg9" Feb 16 17:11:14 crc kubenswrapper[4794]: E0216 17:11:14.057821 4794 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-v7bg9_openshift-operators_d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4_0(6f5e2d241960c5e6d5beee8176ac4af1d027221d1913758214d0c5ba6e9f7b18): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7bg9" Feb 16 17:11:14 crc kubenswrapper[4794]: E0216 17:11:14.057861 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-v7bg9_openshift-operators(d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-v7bg9_openshift-operators(d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-v7bg9_openshift-operators_d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4_0(6f5e2d241960c5e6d5beee8176ac4af1d027221d1913758214d0c5ba6e9f7b18): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7bg9" podUID="d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4" Feb 16 17:11:14 crc kubenswrapper[4794]: E0216 17:11:14.073494 4794 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-d85pd_openshift-operators_bf8a1703-ef5d-4314-92ff-0a4f21d863ca_0(ac7a7e5fa4d9535983caaa669019ffd24887f5ab03399410139d208c82bdf736): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 17:11:14 crc kubenswrapper[4794]: E0216 17:11:14.073563 4794 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-d85pd_openshift-operators_bf8a1703-ef5d-4314-92ff-0a4f21d863ca_0(ac7a7e5fa4d9535983caaa669019ffd24887f5ab03399410139d208c82bdf736): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-d85pd" Feb 16 17:11:14 crc kubenswrapper[4794]: E0216 17:11:14.073590 4794 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-d85pd_openshift-operators_bf8a1703-ef5d-4314-92ff-0a4f21d863ca_0(ac7a7e5fa4d9535983caaa669019ffd24887f5ab03399410139d208c82bdf736): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-d85pd" Feb 16 17:11:14 crc kubenswrapper[4794]: E0216 17:11:14.073634 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-d85pd_openshift-operators(bf8a1703-ef5d-4314-92ff-0a4f21d863ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-d85pd_openshift-operators(bf8a1703-ef5d-4314-92ff-0a4f21d863ca)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-d85pd_openshift-operators_bf8a1703-ef5d-4314-92ff-0a4f21d863ca_0(ac7a7e5fa4d9535983caaa669019ffd24887f5ab03399410139d208c82bdf736): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-d85pd" podUID="bf8a1703-ef5d-4314-92ff-0a4f21d863ca" Feb 16 17:11:14 crc kubenswrapper[4794]: E0216 17:11:14.087507 4794 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tq9qc_openshift-operators_33908c91-9542-47cd-9530-dfe7b104e79e_0(391a3360b2f2af91c6590f1827c5be08fa18092c5a6c2d38c48e97312a6080e0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:11:14 crc kubenswrapper[4794]: E0216 17:11:14.087574 4794 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tq9qc_openshift-operators_33908c91-9542-47cd-9530-dfe7b104e79e_0(391a3360b2f2af91c6590f1827c5be08fa18092c5a6c2d38c48e97312a6080e0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" Feb 16 17:11:14 crc kubenswrapper[4794]: E0216 17:11:14.087597 4794 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tq9qc_openshift-operators_33908c91-9542-47cd-9530-dfe7b104e79e_0(391a3360b2f2af91c6590f1827c5be08fa18092c5a6c2d38c48e97312a6080e0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" Feb 16 17:11:14 crc kubenswrapper[4794]: E0216 17:11:14.087641 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-tq9qc_openshift-operators(33908c91-9542-47cd-9530-dfe7b104e79e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-tq9qc_openshift-operators(33908c91-9542-47cd-9530-dfe7b104e79e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tq9qc_openshift-operators_33908c91-9542-47cd-9530-dfe7b104e79e_0(391a3360b2f2af91c6590f1827c5be08fa18092c5a6c2d38c48e97312a6080e0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" podUID="33908c91-9542-47cd-9530-dfe7b104e79e" Feb 16 17:11:14 crc kubenswrapper[4794]: I0216 17:11:14.268564 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:14 crc kubenswrapper[4794]: I0216 17:11:14.298940 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:20 crc kubenswrapper[4794]: I0216 17:11:20.140828 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:11:20 crc kubenswrapper[4794]: I0216 17:11:20.141450 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Feb 16 17:11:24 crc kubenswrapper[4794]: I0216 17:11:24.790653 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7bg9" Feb 16 17:11:24 crc kubenswrapper[4794]: I0216 17:11:24.790653 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-d85pd" Feb 16 17:11:24 crc kubenswrapper[4794]: I0216 17:11:24.794457 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7bg9" Feb 16 17:11:24 crc kubenswrapper[4794]: I0216 17:11:24.795331 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-d85pd" Feb 16 17:11:25 crc kubenswrapper[4794]: I0216 17:11:25.204791 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-d85pd"] Feb 16 17:11:25 crc kubenswrapper[4794]: I0216 17:11:25.255508 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-v7bg9"] Feb 16 17:11:25 crc kubenswrapper[4794]: I0216 17:11:25.346898 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-d85pd" event={"ID":"bf8a1703-ef5d-4314-92ff-0a4f21d863ca","Type":"ContainerStarted","Data":"0bf6c05203891ccca24fff03b827099d259a46980323fa55c1863af7b473cfb6"} Feb 16 17:11:25 crc kubenswrapper[4794]: I0216 17:11:25.348422 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7bg9" event={"ID":"d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4","Type":"ContainerStarted","Data":"8b9ba589c74a63a00f65339315db89003ed67542a5169d3cf8b4b6b3027cc874"} Feb 16 17:11:25 crc kubenswrapper[4794]: I0216 17:11:25.790691 4794 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568" Feb 16 17:11:25 crc kubenswrapper[4794]: I0216 17:11:25.791469 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568" Feb 16 17:11:26 crc kubenswrapper[4794]: I0216 17:11:26.231123 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568"] Feb 16 17:11:26 crc kubenswrapper[4794]: I0216 17:11:26.378007 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568" event={"ID":"3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a","Type":"ContainerStarted","Data":"1239f878e4e83b180718624bacc2acd64c7a811e630a26889a670115467ecd4e"} Feb 16 17:11:26 crc kubenswrapper[4794]: I0216 17:11:26.790822 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" Feb 16 17:11:26 crc kubenswrapper[4794]: I0216 17:11:26.791321 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" Feb 16 17:11:27 crc kubenswrapper[4794]: I0216 17:11:27.388108 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tq9qc"] Feb 16 17:11:27 crc kubenswrapper[4794]: W0216 17:11:27.428240 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33908c91_9542_47cd_9530_dfe7b104e79e.slice/crio-7022369f3cce19377ba4bf288ec1e763ca1636c7a7a0c632e5603ca9aa680119 WatchSource:0}: Error finding container 7022369f3cce19377ba4bf288ec1e763ca1636c7a7a0c632e5603ca9aa680119: Status 404 returned error can't find the container with id 7022369f3cce19377ba4bf288ec1e763ca1636c7a7a0c632e5603ca9aa680119 Feb 16 17:11:28 crc kubenswrapper[4794]: I0216 17:11:28.399444 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" event={"ID":"33908c91-9542-47cd-9530-dfe7b104e79e","Type":"ContainerStarted","Data":"7022369f3cce19377ba4bf288ec1e763ca1636c7a7a0c632e5603ca9aa680119"} Feb 16 17:11:28 crc kubenswrapper[4794]: I0216 17:11:28.799193 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf" Feb 16 17:11:28 crc kubenswrapper[4794]: I0216 17:11:28.800346 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf" Feb 16 17:11:35 crc kubenswrapper[4794]: I0216 17:11:35.273963 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf"] Feb 16 17:11:35 crc kubenswrapper[4794]: W0216 17:11:35.299517 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ca9bb6d_4f89_469a_aff2_3ecb9dcc814b.slice/crio-e118d1278cec5af826044ca0f07ea7953df729f857825b8932070c075f779a3b WatchSource:0}: Error finding container e118d1278cec5af826044ca0f07ea7953df729f857825b8932070c075f779a3b: Status 404 returned error can't find the container with id e118d1278cec5af826044ca0f07ea7953df729f857825b8932070c075f779a3b Feb 16 17:11:35 crc kubenswrapper[4794]: I0216 17:11:35.440044 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-d85pd" event={"ID":"bf8a1703-ef5d-4314-92ff-0a4f21d863ca","Type":"ContainerStarted","Data":"d67b8d7393d9379c073ffc5e42f974992ce0cb266229007a2ef4aa00c6d9e73a"} Feb 16 17:11:35 crc kubenswrapper[4794]: I0216 17:11:35.440288 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-d85pd" Feb 16 17:11:35 crc kubenswrapper[4794]: I0216 17:11:35.442079 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-d85pd" Feb 16 17:11:35 crc kubenswrapper[4794]: I0216 17:11:35.442363 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7bg9" event={"ID":"d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4","Type":"ContainerStarted","Data":"da47716bc4c222cabe9203cde08f7d717b09654acc3163177b4c3ccff8cd60bb"} Feb 16 17:11:35 crc kubenswrapper[4794]: I0216 17:11:35.444154 4794 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf" event={"ID":"0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b","Type":"ContainerStarted","Data":"df769a516aa7bfba9d754fab85de5705be76b91b1438ee5c87eae1671871de3a"} Feb 16 17:11:35 crc kubenswrapper[4794]: I0216 17:11:35.444200 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf" event={"ID":"0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b","Type":"ContainerStarted","Data":"e118d1278cec5af826044ca0f07ea7953df729f857825b8932070c075f779a3b"} Feb 16 17:11:35 crc kubenswrapper[4794]: I0216 17:11:35.445823 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" event={"ID":"33908c91-9542-47cd-9530-dfe7b104e79e","Type":"ContainerStarted","Data":"28aa65ddb6698693c79d87b917ad8f5b159788e6c24e5b97d7fe75a371694fb8"} Feb 16 17:11:35 crc kubenswrapper[4794]: I0216 17:11:35.445928 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" Feb 16 17:11:35 crc kubenswrapper[4794]: I0216 17:11:35.447387 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568" event={"ID":"3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a","Type":"ContainerStarted","Data":"5354d60797e809e53724ae4b5392d702b453572124ccae7745f5c4fb04e8ef77"} Feb 16 17:11:35 crc kubenswrapper[4794]: I0216 17:11:35.458342 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-d85pd" podStartSLOduration=13.824942171 podStartE2EDuration="23.458320619s" podCreationTimestamp="2026-02-16 17:11:12 +0000 UTC" firstStartedPulling="2026-02-16 17:11:25.226239524 +0000 UTC m=+711.174334172" lastFinishedPulling="2026-02-16 17:11:34.859617973 +0000 UTC m=+720.807712620" 
observedRunningTime="2026-02-16 17:11:35.455618943 +0000 UTC m=+721.403713590" watchObservedRunningTime="2026-02-16 17:11:35.458320619 +0000 UTC m=+721.406415266" Feb 16 17:11:35 crc kubenswrapper[4794]: I0216 17:11:35.484133 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568" podStartSLOduration=14.833836091 podStartE2EDuration="23.484115828s" podCreationTimestamp="2026-02-16 17:11:12 +0000 UTC" firstStartedPulling="2026-02-16 17:11:26.256089637 +0000 UTC m=+712.204184294" lastFinishedPulling="2026-02-16 17:11:34.906369394 +0000 UTC m=+720.854464031" observedRunningTime="2026-02-16 17:11:35.480403433 +0000 UTC m=+721.428498080" watchObservedRunningTime="2026-02-16 17:11:35.484115828 +0000 UTC m=+721.432210475" Feb 16 17:11:35 crc kubenswrapper[4794]: I0216 17:11:35.519388 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7bg9" podStartSLOduration=13.935543562 podStartE2EDuration="23.519371953s" podCreationTimestamp="2026-02-16 17:11:12 +0000 UTC" firstStartedPulling="2026-02-16 17:11:25.262465655 +0000 UTC m=+711.210560312" lastFinishedPulling="2026-02-16 17:11:34.846294056 +0000 UTC m=+720.794388703" observedRunningTime="2026-02-16 17:11:35.516462861 +0000 UTC m=+721.464557518" watchObservedRunningTime="2026-02-16 17:11:35.519371953 +0000 UTC m=+721.467466600" Feb 16 17:11:35 crc kubenswrapper[4794]: I0216 17:11:35.569581 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" podStartSLOduration=16.06257638 podStartE2EDuration="23.569558421s" podCreationTimestamp="2026-02-16 17:11:12 +0000 UTC" firstStartedPulling="2026-02-16 17:11:27.432746825 +0000 UTC m=+713.380841472" lastFinishedPulling="2026-02-16 17:11:34.939728866 +0000 UTC m=+720.887823513" observedRunningTime="2026-02-16 17:11:35.566492325 
+0000 UTC m=+721.514586972" watchObservedRunningTime="2026-02-16 17:11:35.569558421 +0000 UTC m=+721.517653058" Feb 16 17:11:35 crc kubenswrapper[4794]: I0216 17:11:35.595669 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf" podStartSLOduration=23.595646309 podStartE2EDuration="23.595646309s" podCreationTimestamp="2026-02-16 17:11:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:11:35.585654536 +0000 UTC m=+721.533749183" watchObservedRunningTime="2026-02-16 17:11:35.595646309 +0000 UTC m=+721.543740956" Feb 16 17:11:36 crc kubenswrapper[4794]: I0216 17:11:36.358509 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pvz6h" Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.441800 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-l8dwz"] Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.443686 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l8dwz" Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.454058 4794 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-t9vv8" Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.454441 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.454658 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.457600 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-kr8wx"] Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.458650 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-kr8wx" Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.462039 4794 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-6s5vj" Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.466827 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-l8dwz"] Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.477500 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-q75lx"] Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.479118 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-q75lx" Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.481757 4794 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-vtvz4" Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.489941 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-kr8wx"] Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.499141 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-q75lx"] Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.509516 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82txz\" (UniqueName: \"kubernetes.io/projected/9b1afc0d-a17d-4891-8e30-c1b1edf3deab-kube-api-access-82txz\") pod \"cert-manager-webhook-687f57d79b-q75lx\" (UID: \"9b1afc0d-a17d-4891-8e30-c1b1edf3deab\") " pod="cert-manager/cert-manager-webhook-687f57d79b-q75lx" Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.509587 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsp6s\" (UniqueName: \"kubernetes.io/projected/c6c81378-1dc6-496a-946f-b403a2dc0260-kube-api-access-vsp6s\") pod \"cert-manager-cainjector-cf98fcc89-l8dwz\" (UID: \"c6c81378-1dc6-496a-946f-b403a2dc0260\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-l8dwz" Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.509645 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwxnk\" (UniqueName: \"kubernetes.io/projected/66dd19a7-a89d-4a32-9c65-8b24e4b01363-kube-api-access-cwxnk\") pod \"cert-manager-858654f9db-kr8wx\" (UID: \"66dd19a7-a89d-4a32-9c65-8b24e4b01363\") " pod="cert-manager/cert-manager-858654f9db-kr8wx" Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.611167 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82txz\" (UniqueName: \"kubernetes.io/projected/9b1afc0d-a17d-4891-8e30-c1b1edf3deab-kube-api-access-82txz\") pod \"cert-manager-webhook-687f57d79b-q75lx\" (UID: \"9b1afc0d-a17d-4891-8e30-c1b1edf3deab\") " pod="cert-manager/cert-manager-webhook-687f57d79b-q75lx" Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.611223 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsp6s\" (UniqueName: \"kubernetes.io/projected/c6c81378-1dc6-496a-946f-b403a2dc0260-kube-api-access-vsp6s\") pod \"cert-manager-cainjector-cf98fcc89-l8dwz\" (UID: \"c6c81378-1dc6-496a-946f-b403a2dc0260\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-l8dwz" Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.611279 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwxnk\" (UniqueName: \"kubernetes.io/projected/66dd19a7-a89d-4a32-9c65-8b24e4b01363-kube-api-access-cwxnk\") pod \"cert-manager-858654f9db-kr8wx\" (UID: \"66dd19a7-a89d-4a32-9c65-8b24e4b01363\") " pod="cert-manager/cert-manager-858654f9db-kr8wx" Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.636928 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsp6s\" (UniqueName: \"kubernetes.io/projected/c6c81378-1dc6-496a-946f-b403a2dc0260-kube-api-access-vsp6s\") pod \"cert-manager-cainjector-cf98fcc89-l8dwz\" (UID: \"c6c81378-1dc6-496a-946f-b403a2dc0260\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-l8dwz" Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.637510 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82txz\" (UniqueName: \"kubernetes.io/projected/9b1afc0d-a17d-4891-8e30-c1b1edf3deab-kube-api-access-82txz\") pod \"cert-manager-webhook-687f57d79b-q75lx\" (UID: \"9b1afc0d-a17d-4891-8e30-c1b1edf3deab\") " 
pod="cert-manager/cert-manager-webhook-687f57d79b-q75lx" Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.638809 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwxnk\" (UniqueName: \"kubernetes.io/projected/66dd19a7-a89d-4a32-9c65-8b24e4b01363-kube-api-access-cwxnk\") pod \"cert-manager-858654f9db-kr8wx\" (UID: \"66dd19a7-a89d-4a32-9c65-8b24e4b01363\") " pod="cert-manager/cert-manager-858654f9db-kr8wx" Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.773006 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l8dwz" Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.781814 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-kr8wx" Feb 16 17:11:42 crc kubenswrapper[4794]: I0216 17:11:42.804087 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-q75lx" Feb 16 17:11:43 crc kubenswrapper[4794]: I0216 17:11:43.082541 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-tq9qc" Feb 16 17:11:43 crc kubenswrapper[4794]: I0216 17:11:43.220854 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-kr8wx"] Feb 16 17:11:43 crc kubenswrapper[4794]: W0216 17:11:43.224359 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66dd19a7_a89d_4a32_9c65_8b24e4b01363.slice/crio-44281538db98bd63b2f978773f8f01aea2b9a7f0c1b00d4de59cf3023acec471 WatchSource:0}: Error finding container 44281538db98bd63b2f978773f8f01aea2b9a7f0c1b00d4de59cf3023acec471: Status 404 returned error can't find the container with id 44281538db98bd63b2f978773f8f01aea2b9a7f0c1b00d4de59cf3023acec471 Feb 16 17:11:43 crc kubenswrapper[4794]: I0216 
17:11:43.277007 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-q75lx"] Feb 16 17:11:43 crc kubenswrapper[4794]: I0216 17:11:43.286246 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-l8dwz"] Feb 16 17:11:43 crc kubenswrapper[4794]: W0216 17:11:43.287470 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6c81378_1dc6_496a_946f_b403a2dc0260.slice/crio-6cb90a6204fce281a3f4b1ff926166fa9a3444781cde7ff6aab5ce8dfba32891 WatchSource:0}: Error finding container 6cb90a6204fce281a3f4b1ff926166fa9a3444781cde7ff6aab5ce8dfba32891: Status 404 returned error can't find the container with id 6cb90a6204fce281a3f4b1ff926166fa9a3444781cde7ff6aab5ce8dfba32891 Feb 16 17:11:43 crc kubenswrapper[4794]: W0216 17:11:43.287734 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b1afc0d_a17d_4891_8e30_c1b1edf3deab.slice/crio-2a0287d7a9318a6f113a8f3767a7b5c853bd7d2bc7abbb84111fde4d59d42c49 WatchSource:0}: Error finding container 2a0287d7a9318a6f113a8f3767a7b5c853bd7d2bc7abbb84111fde4d59d42c49: Status 404 returned error can't find the container with id 2a0287d7a9318a6f113a8f3767a7b5c853bd7d2bc7abbb84111fde4d59d42c49 Feb 16 17:11:43 crc kubenswrapper[4794]: I0216 17:11:43.506841 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-kr8wx" event={"ID":"66dd19a7-a89d-4a32-9c65-8b24e4b01363","Type":"ContainerStarted","Data":"44281538db98bd63b2f978773f8f01aea2b9a7f0c1b00d4de59cf3023acec471"} Feb 16 17:11:43 crc kubenswrapper[4794]: I0216 17:11:43.508182 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l8dwz" 
event={"ID":"c6c81378-1dc6-496a-946f-b403a2dc0260","Type":"ContainerStarted","Data":"6cb90a6204fce281a3f4b1ff926166fa9a3444781cde7ff6aab5ce8dfba32891"} Feb 16 17:11:43 crc kubenswrapper[4794]: I0216 17:11:43.509313 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-q75lx" event={"ID":"9b1afc0d-a17d-4891-8e30-c1b1edf3deab","Type":"ContainerStarted","Data":"2a0287d7a9318a6f113a8f3767a7b5c853bd7d2bc7abbb84111fde4d59d42c49"} Feb 16 17:11:45 crc kubenswrapper[4794]: I0216 17:11:45.310374 4794 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 17:11:47 crc kubenswrapper[4794]: I0216 17:11:47.543133 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-q75lx" event={"ID":"9b1afc0d-a17d-4891-8e30-c1b1edf3deab","Type":"ContainerStarted","Data":"2f5ce9edc7a913ea352a562169565d5fad43f059ac49987e514aead3b7aee28b"} Feb 16 17:11:47 crc kubenswrapper[4794]: I0216 17:11:47.544351 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-q75lx" Feb 16 17:11:47 crc kubenswrapper[4794]: I0216 17:11:47.570158 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-q75lx" podStartSLOduration=1.6268126330000001 podStartE2EDuration="5.570135933s" podCreationTimestamp="2026-02-16 17:11:42 +0000 UTC" firstStartedPulling="2026-02-16 17:11:43.290082648 +0000 UTC m=+729.238177295" lastFinishedPulling="2026-02-16 17:11:47.233405948 +0000 UTC m=+733.181500595" observedRunningTime="2026-02-16 17:11:47.562485757 +0000 UTC m=+733.510580404" watchObservedRunningTime="2026-02-16 17:11:47.570135933 +0000 UTC m=+733.518230580" Feb 16 17:11:49 crc kubenswrapper[4794]: I0216 17:11:49.555566 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-l8dwz" event={"ID":"c6c81378-1dc6-496a-946f-b403a2dc0260","Type":"ContainerStarted","Data":"bba72abfb02f5a8f784cac554c03aae611f5f1d50bfccaab767c216b3247f68e"} Feb 16 17:11:49 crc kubenswrapper[4794]: I0216 17:11:49.557658 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-kr8wx" event={"ID":"66dd19a7-a89d-4a32-9c65-8b24e4b01363","Type":"ContainerStarted","Data":"ea8495b54d58e1e924e0150413e5629262d476ebff7ef321d53b4357bc69c60b"} Feb 16 17:11:49 crc kubenswrapper[4794]: I0216 17:11:49.579919 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l8dwz" podStartSLOduration=2.163807965 podStartE2EDuration="7.57989839s" podCreationTimestamp="2026-02-16 17:11:42 +0000 UTC" firstStartedPulling="2026-02-16 17:11:43.289170692 +0000 UTC m=+729.237265329" lastFinishedPulling="2026-02-16 17:11:48.705261107 +0000 UTC m=+734.653355754" observedRunningTime="2026-02-16 17:11:49.57848323 +0000 UTC m=+735.526577897" watchObservedRunningTime="2026-02-16 17:11:49.57989839 +0000 UTC m=+735.527993037" Feb 16 17:11:49 crc kubenswrapper[4794]: I0216 17:11:49.604931 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-kr8wx" podStartSLOduration=2.166324267 podStartE2EDuration="7.604913647s" podCreationTimestamp="2026-02-16 17:11:42 +0000 UTC" firstStartedPulling="2026-02-16 17:11:43.226426009 +0000 UTC m=+729.174520656" lastFinishedPulling="2026-02-16 17:11:48.665015369 +0000 UTC m=+734.613110036" observedRunningTime="2026-02-16 17:11:49.60185369 +0000 UTC m=+735.549948337" watchObservedRunningTime="2026-02-16 17:11:49.604913647 +0000 UTC m=+735.553008294" Feb 16 17:11:50 crc kubenswrapper[4794]: I0216 17:11:50.141428 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:11:50 crc kubenswrapper[4794]: I0216 17:11:50.141601 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:11:52 crc kubenswrapper[4794]: I0216 17:11:52.809489 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-q75lx" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.411525 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l"] Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.413709 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.422603 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.428866 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l"] Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.599111 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l\" (UID: \"eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.599189 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l\" (UID: \"eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.599367 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppdgh\" (UniqueName: \"kubernetes.io/projected/eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa-kube-api-access-ppdgh\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l\" (UID: \"eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l" Feb 16 17:12:15 crc kubenswrapper[4794]: 
I0216 17:12:15.642175 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd"] Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.644257 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.667284 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd"] Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.700866 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c548d720-7bad-47af-badb-d01ab54e8afd-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd\" (UID: \"c548d720-7bad-47af-badb-d01ab54e8afd\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.700944 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7twv\" (UniqueName: \"kubernetes.io/projected/c548d720-7bad-47af-badb-d01ab54e8afd-kube-api-access-c7twv\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd\" (UID: \"c548d720-7bad-47af-badb-d01ab54e8afd\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.700985 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l\" (UID: \"eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa\") " 
pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.701040 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l\" (UID: \"eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.701082 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppdgh\" (UniqueName: \"kubernetes.io/projected/eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa-kube-api-access-ppdgh\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l\" (UID: \"eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.701127 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c548d720-7bad-47af-badb-d01ab54e8afd-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd\" (UID: \"c548d720-7bad-47af-badb-d01ab54e8afd\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.701766 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l\" (UID: \"eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 
17:12:15.702033 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l\" (UID: \"eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.729374 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppdgh\" (UniqueName: \"kubernetes.io/projected/eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa-kube-api-access-ppdgh\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l\" (UID: \"eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.743277 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.802434 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c548d720-7bad-47af-badb-d01ab54e8afd-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd\" (UID: \"c548d720-7bad-47af-badb-d01ab54e8afd\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.802643 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c548d720-7bad-47af-badb-d01ab54e8afd-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd\" (UID: \"c548d720-7bad-47af-badb-d01ab54e8afd\") " 
pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.802713 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7twv\" (UniqueName: \"kubernetes.io/projected/c548d720-7bad-47af-badb-d01ab54e8afd-kube-api-access-c7twv\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd\" (UID: \"c548d720-7bad-47af-badb-d01ab54e8afd\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.803271 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c548d720-7bad-47af-badb-d01ab54e8afd-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd\" (UID: \"c548d720-7bad-47af-badb-d01ab54e8afd\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.803367 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c548d720-7bad-47af-badb-d01ab54e8afd-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd\" (UID: \"c548d720-7bad-47af-badb-d01ab54e8afd\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 17:12:15.825331 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7twv\" (UniqueName: \"kubernetes.io/projected/c548d720-7bad-47af-badb-d01ab54e8afd-kube-api-access-c7twv\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd\" (UID: \"c548d720-7bad-47af-badb-d01ab54e8afd\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd" Feb 16 17:12:15 crc kubenswrapper[4794]: I0216 
17:12:15.957416 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd" Feb 16 17:12:16 crc kubenswrapper[4794]: I0216 17:12:16.147560 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd"] Feb 16 17:12:16 crc kubenswrapper[4794]: I0216 17:12:16.165431 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l"] Feb 16 17:12:16 crc kubenswrapper[4794]: W0216 17:12:16.177500 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb9f74b1_cfb9_43bd_981b_106ab4e9f0fa.slice/crio-e259096aaaf39a88c3d7c98bae4ba56ab2cf54bda158b07b4563d519e8ecb843 WatchSource:0}: Error finding container e259096aaaf39a88c3d7c98bae4ba56ab2cf54bda158b07b4563d519e8ecb843: Status 404 returned error can't find the container with id e259096aaaf39a88c3d7c98bae4ba56ab2cf54bda158b07b4563d519e8ecb843 Feb 16 17:12:16 crc kubenswrapper[4794]: I0216 17:12:16.769499 4794 generic.go:334] "Generic (PLEG): container finished" podID="c548d720-7bad-47af-badb-d01ab54e8afd" containerID="c5faa951e4c18334268844ae614e7f8fcc3bbe53101d287cfca3823513f55ef1" exitCode=0 Feb 16 17:12:16 crc kubenswrapper[4794]: I0216 17:12:16.769573 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd" event={"ID":"c548d720-7bad-47af-badb-d01ab54e8afd","Type":"ContainerDied","Data":"c5faa951e4c18334268844ae614e7f8fcc3bbe53101d287cfca3823513f55ef1"} Feb 16 17:12:16 crc kubenswrapper[4794]: I0216 17:12:16.770571 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd" 
event={"ID":"c548d720-7bad-47af-badb-d01ab54e8afd","Type":"ContainerStarted","Data":"3e455407e3599bebedb9fd0fa50f59f7c91ae30556ed35dec96e2074498db194"} Feb 16 17:12:16 crc kubenswrapper[4794]: I0216 17:12:16.777023 4794 generic.go:334] "Generic (PLEG): container finished" podID="eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa" containerID="0fdb4cea2306ddb1bf55b1abdc148ac056a486e6f7ed294cae883812267048f8" exitCode=0 Feb 16 17:12:16 crc kubenswrapper[4794]: I0216 17:12:16.777069 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l" event={"ID":"eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa","Type":"ContainerDied","Data":"0fdb4cea2306ddb1bf55b1abdc148ac056a486e6f7ed294cae883812267048f8"} Feb 16 17:12:16 crc kubenswrapper[4794]: I0216 17:12:16.777107 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l" event={"ID":"eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa","Type":"ContainerStarted","Data":"e259096aaaf39a88c3d7c98bae4ba56ab2cf54bda158b07b4563d519e8ecb843"} Feb 16 17:12:19 crc kubenswrapper[4794]: I0216 17:12:19.801883 4794 generic.go:334] "Generic (PLEG): container finished" podID="eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa" containerID="7a613da952756b9b40dbde700102a068e4f18984398a5d1f9f987bb1b881dfad" exitCode=0 Feb 16 17:12:19 crc kubenswrapper[4794]: I0216 17:12:19.801935 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l" event={"ID":"eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa","Type":"ContainerDied","Data":"7a613da952756b9b40dbde700102a068e4f18984398a5d1f9f987bb1b881dfad"} Feb 16 17:12:19 crc kubenswrapper[4794]: I0216 17:12:19.804982 4794 generic.go:334] "Generic (PLEG): container finished" podID="c548d720-7bad-47af-badb-d01ab54e8afd" containerID="adfc575a3bbb14bb285229b12bbd3fbd71fce5e6b8c0e2b3d93eb1194034e58c" exitCode=0 
Feb 16 17:12:19 crc kubenswrapper[4794]: I0216 17:12:19.805046 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd" event={"ID":"c548d720-7bad-47af-badb-d01ab54e8afd","Type":"ContainerDied","Data":"adfc575a3bbb14bb285229b12bbd3fbd71fce5e6b8c0e2b3d93eb1194034e58c"} Feb 16 17:12:19 crc kubenswrapper[4794]: I0216 17:12:19.954931 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6jmf6"] Feb 16 17:12:19 crc kubenswrapper[4794]: I0216 17:12:19.960324 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6jmf6" Feb 16 17:12:19 crc kubenswrapper[4794]: I0216 17:12:19.965221 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6jmf6"] Feb 16 17:12:19 crc kubenswrapper[4794]: I0216 17:12:19.967839 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62ca5417-33d6-4b3f-9558-245fbc4ab116-utilities\") pod \"redhat-operators-6jmf6\" (UID: \"62ca5417-33d6-4b3f-9558-245fbc4ab116\") " pod="openshift-marketplace/redhat-operators-6jmf6" Feb 16 17:12:19 crc kubenswrapper[4794]: I0216 17:12:19.967899 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp5mg\" (UniqueName: \"kubernetes.io/projected/62ca5417-33d6-4b3f-9558-245fbc4ab116-kube-api-access-zp5mg\") pod \"redhat-operators-6jmf6\" (UID: \"62ca5417-33d6-4b3f-9558-245fbc4ab116\") " pod="openshift-marketplace/redhat-operators-6jmf6" Feb 16 17:12:19 crc kubenswrapper[4794]: I0216 17:12:19.967987 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62ca5417-33d6-4b3f-9558-245fbc4ab116-catalog-content\") pod 
\"redhat-operators-6jmf6\" (UID: \"62ca5417-33d6-4b3f-9558-245fbc4ab116\") " pod="openshift-marketplace/redhat-operators-6jmf6" Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.068329 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62ca5417-33d6-4b3f-9558-245fbc4ab116-utilities\") pod \"redhat-operators-6jmf6\" (UID: \"62ca5417-33d6-4b3f-9558-245fbc4ab116\") " pod="openshift-marketplace/redhat-operators-6jmf6" Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.068388 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp5mg\" (UniqueName: \"kubernetes.io/projected/62ca5417-33d6-4b3f-9558-245fbc4ab116-kube-api-access-zp5mg\") pod \"redhat-operators-6jmf6\" (UID: \"62ca5417-33d6-4b3f-9558-245fbc4ab116\") " pod="openshift-marketplace/redhat-operators-6jmf6" Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.068793 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62ca5417-33d6-4b3f-9558-245fbc4ab116-catalog-content\") pod \"redhat-operators-6jmf6\" (UID: \"62ca5417-33d6-4b3f-9558-245fbc4ab116\") " pod="openshift-marketplace/redhat-operators-6jmf6" Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.068889 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62ca5417-33d6-4b3f-9558-245fbc4ab116-utilities\") pod \"redhat-operators-6jmf6\" (UID: \"62ca5417-33d6-4b3f-9558-245fbc4ab116\") " pod="openshift-marketplace/redhat-operators-6jmf6" Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.069130 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62ca5417-33d6-4b3f-9558-245fbc4ab116-catalog-content\") pod \"redhat-operators-6jmf6\" (UID: 
\"62ca5417-33d6-4b3f-9558-245fbc4ab116\") " pod="openshift-marketplace/redhat-operators-6jmf6" Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.110629 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp5mg\" (UniqueName: \"kubernetes.io/projected/62ca5417-33d6-4b3f-9558-245fbc4ab116-kube-api-access-zp5mg\") pod \"redhat-operators-6jmf6\" (UID: \"62ca5417-33d6-4b3f-9558-245fbc4ab116\") " pod="openshift-marketplace/redhat-operators-6jmf6" Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.140480 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.140556 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.140909 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.141818 4794 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1242c1c0cd51c797081153357fc1a3afcbb8aac8f950b8ce178092b5638f56c5"} pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.141906 4794 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" containerID="cri-o://1242c1c0cd51c797081153357fc1a3afcbb8aac8f950b8ce178092b5638f56c5" gracePeriod=600 Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.280491 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6jmf6" Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.501020 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6jmf6"] Feb 16 17:12:20 crc kubenswrapper[4794]: W0216 17:12:20.514488 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62ca5417_33d6_4b3f_9558_245fbc4ab116.slice/crio-7609fcc7f31f11a87d0d2934cf1d680ced38ecb45a01816bd64418940a2afa79 WatchSource:0}: Error finding container 7609fcc7f31f11a87d0d2934cf1d680ced38ecb45a01816bd64418940a2afa79: Status 404 returned error can't find the container with id 7609fcc7f31f11a87d0d2934cf1d680ced38ecb45a01816bd64418940a2afa79 Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.813789 4794 generic.go:334] "Generic (PLEG): container finished" podID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerID="1242c1c0cd51c797081153357fc1a3afcbb8aac8f950b8ce178092b5638f56c5" exitCode=0 Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.813996 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerDied","Data":"1242c1c0cd51c797081153357fc1a3afcbb8aac8f950b8ce178092b5638f56c5"} Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.814175 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" 
event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"3aa97207ca6eb1342d7e8e60d0b01510075367f6246c193f5626cd5253489630"} Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.814196 4794 scope.go:117] "RemoveContainer" containerID="c272885df85830363f92a97efc1eb57e276b4a14b8042b5e60c9c53b0e8dd10b" Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.817696 4794 generic.go:334] "Generic (PLEG): container finished" podID="c548d720-7bad-47af-badb-d01ab54e8afd" containerID="191d331ee4296dfd6603eba58bf9de9b9134dab1018a217c040466ba983dc97f" exitCode=0 Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.817765 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd" event={"ID":"c548d720-7bad-47af-badb-d01ab54e8afd","Type":"ContainerDied","Data":"191d331ee4296dfd6603eba58bf9de9b9134dab1018a217c040466ba983dc97f"} Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.819395 4794 generic.go:334] "Generic (PLEG): container finished" podID="62ca5417-33d6-4b3f-9558-245fbc4ab116" containerID="621aa927bc08a41045fed0ca6dba1ddcdadcb35b7f7cda2b9edea3aaa5044e7b" exitCode=0 Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.819448 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jmf6" event={"ID":"62ca5417-33d6-4b3f-9558-245fbc4ab116","Type":"ContainerDied","Data":"621aa927bc08a41045fed0ca6dba1ddcdadcb35b7f7cda2b9edea3aaa5044e7b"} Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.819465 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jmf6" event={"ID":"62ca5417-33d6-4b3f-9558-245fbc4ab116","Type":"ContainerStarted","Data":"7609fcc7f31f11a87d0d2934cf1d680ced38ecb45a01816bd64418940a2afa79"} Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.823401 4794 generic.go:334] "Generic (PLEG): container finished" 
podID="eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa" containerID="a52f6f32863c216405f03568d1f8dc422a8de1af7f8340da9c2805d47114f4fa" exitCode=0 Feb 16 17:12:20 crc kubenswrapper[4794]: I0216 17:12:20.823426 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l" event={"ID":"eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa","Type":"ContainerDied","Data":"a52f6f32863c216405f03568d1f8dc422a8de1af7f8340da9c2805d47114f4fa"} Feb 16 17:12:21 crc kubenswrapper[4794]: I0216 17:12:21.159679 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-47lcs"] Feb 16 17:12:21 crc kubenswrapper[4794]: I0216 17:12:21.161221 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-47lcs" Feb 16 17:12:21 crc kubenswrapper[4794]: I0216 17:12:21.171425 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-47lcs"] Feb 16 17:12:21 crc kubenswrapper[4794]: I0216 17:12:21.286226 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zhjf\" (UniqueName: \"kubernetes.io/projected/5b4526e2-b1a7-43ff-9094-13bc4d1f3626-kube-api-access-8zhjf\") pod \"redhat-marketplace-47lcs\" (UID: \"5b4526e2-b1a7-43ff-9094-13bc4d1f3626\") " pod="openshift-marketplace/redhat-marketplace-47lcs" Feb 16 17:12:21 crc kubenswrapper[4794]: I0216 17:12:21.286360 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4526e2-b1a7-43ff-9094-13bc4d1f3626-utilities\") pod \"redhat-marketplace-47lcs\" (UID: \"5b4526e2-b1a7-43ff-9094-13bc4d1f3626\") " pod="openshift-marketplace/redhat-marketplace-47lcs" Feb 16 17:12:21 crc kubenswrapper[4794]: I0216 17:12:21.286390 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4526e2-b1a7-43ff-9094-13bc4d1f3626-catalog-content\") pod \"redhat-marketplace-47lcs\" (UID: \"5b4526e2-b1a7-43ff-9094-13bc4d1f3626\") " pod="openshift-marketplace/redhat-marketplace-47lcs" Feb 16 17:12:21 crc kubenswrapper[4794]: I0216 17:12:21.387939 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zhjf\" (UniqueName: \"kubernetes.io/projected/5b4526e2-b1a7-43ff-9094-13bc4d1f3626-kube-api-access-8zhjf\") pod \"redhat-marketplace-47lcs\" (UID: \"5b4526e2-b1a7-43ff-9094-13bc4d1f3626\") " pod="openshift-marketplace/redhat-marketplace-47lcs" Feb 16 17:12:21 crc kubenswrapper[4794]: I0216 17:12:21.388037 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4526e2-b1a7-43ff-9094-13bc4d1f3626-utilities\") pod \"redhat-marketplace-47lcs\" (UID: \"5b4526e2-b1a7-43ff-9094-13bc4d1f3626\") " pod="openshift-marketplace/redhat-marketplace-47lcs" Feb 16 17:12:21 crc kubenswrapper[4794]: I0216 17:12:21.388059 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4526e2-b1a7-43ff-9094-13bc4d1f3626-catalog-content\") pod \"redhat-marketplace-47lcs\" (UID: \"5b4526e2-b1a7-43ff-9094-13bc4d1f3626\") " pod="openshift-marketplace/redhat-marketplace-47lcs" Feb 16 17:12:21 crc kubenswrapper[4794]: I0216 17:12:21.388526 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4526e2-b1a7-43ff-9094-13bc4d1f3626-catalog-content\") pod \"redhat-marketplace-47lcs\" (UID: \"5b4526e2-b1a7-43ff-9094-13bc4d1f3626\") " pod="openshift-marketplace/redhat-marketplace-47lcs" Feb 16 17:12:21 crc kubenswrapper[4794]: I0216 17:12:21.389076 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/5b4526e2-b1a7-43ff-9094-13bc4d1f3626-utilities\") pod \"redhat-marketplace-47lcs\" (UID: \"5b4526e2-b1a7-43ff-9094-13bc4d1f3626\") " pod="openshift-marketplace/redhat-marketplace-47lcs" Feb 16 17:12:21 crc kubenswrapper[4794]: I0216 17:12:21.411356 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zhjf\" (UniqueName: \"kubernetes.io/projected/5b4526e2-b1a7-43ff-9094-13bc4d1f3626-kube-api-access-8zhjf\") pod \"redhat-marketplace-47lcs\" (UID: \"5b4526e2-b1a7-43ff-9094-13bc4d1f3626\") " pod="openshift-marketplace/redhat-marketplace-47lcs" Feb 16 17:12:21 crc kubenswrapper[4794]: I0216 17:12:21.478949 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-47lcs" Feb 16 17:12:21 crc kubenswrapper[4794]: I0216 17:12:21.709207 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-47lcs"] Feb 16 17:12:21 crc kubenswrapper[4794]: I0216 17:12:21.835015 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jmf6" event={"ID":"62ca5417-33d6-4b3f-9558-245fbc4ab116","Type":"ContainerStarted","Data":"ab73987fd238bfda9b2bd754c848eeef7cad4a2d495c237724fe49e4c639ccad"} Feb 16 17:12:21 crc kubenswrapper[4794]: I0216 17:12:21.847769 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47lcs" event={"ID":"5b4526e2-b1a7-43ff-9094-13bc4d1f3626","Type":"ContainerStarted","Data":"3f87ab56dc62e4afe2a5fe94bded639eb172d674f91732646dc191f4fdfdf69d"} Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.081134 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l" Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.105791 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa-util\") pod \"eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa\" (UID: \"eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa\") " Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.105834 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppdgh\" (UniqueName: \"kubernetes.io/projected/eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa-kube-api-access-ppdgh\") pod \"eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa\" (UID: \"eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa\") " Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.105925 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa-bundle\") pod \"eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa\" (UID: \"eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa\") " Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.107744 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa-bundle" (OuterVolumeSpecName: "bundle") pod "eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa" (UID: "eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.111571 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa-kube-api-access-ppdgh" (OuterVolumeSpecName: "kube-api-access-ppdgh") pod "eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa" (UID: "eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa"). InnerVolumeSpecName "kube-api-access-ppdgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.208039 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppdgh\" (UniqueName: \"kubernetes.io/projected/eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa-kube-api-access-ppdgh\") on node \"crc\" DevicePath \"\"" Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.208067 4794 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.236833 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa-util" (OuterVolumeSpecName: "util") pod "eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa" (UID: "eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.238427 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd" Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.309003 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c548d720-7bad-47af-badb-d01ab54e8afd-util\") pod \"c548d720-7bad-47af-badb-d01ab54e8afd\" (UID: \"c548d720-7bad-47af-badb-d01ab54e8afd\") " Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.309495 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c548d720-7bad-47af-badb-d01ab54e8afd-bundle\") pod \"c548d720-7bad-47af-badb-d01ab54e8afd\" (UID: \"c548d720-7bad-47af-badb-d01ab54e8afd\") " Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.309603 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7twv\" (UniqueName: \"kubernetes.io/projected/c548d720-7bad-47af-badb-d01ab54e8afd-kube-api-access-c7twv\") pod \"c548d720-7bad-47af-badb-d01ab54e8afd\" (UID: \"c548d720-7bad-47af-badb-d01ab54e8afd\") " Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.309921 4794 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa-util\") on node \"crc\" DevicePath \"\"" Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.310605 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c548d720-7bad-47af-badb-d01ab54e8afd-bundle" (OuterVolumeSpecName: "bundle") pod "c548d720-7bad-47af-badb-d01ab54e8afd" (UID: "c548d720-7bad-47af-badb-d01ab54e8afd"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.312936 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c548d720-7bad-47af-badb-d01ab54e8afd-kube-api-access-c7twv" (OuterVolumeSpecName: "kube-api-access-c7twv") pod "c548d720-7bad-47af-badb-d01ab54e8afd" (UID: "c548d720-7bad-47af-badb-d01ab54e8afd"). InnerVolumeSpecName "kube-api-access-c7twv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.320518 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c548d720-7bad-47af-badb-d01ab54e8afd-util" (OuterVolumeSpecName: "util") pod "c548d720-7bad-47af-badb-d01ab54e8afd" (UID: "c548d720-7bad-47af-badb-d01ab54e8afd"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.411257 4794 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c548d720-7bad-47af-badb-d01ab54e8afd-util\") on node \"crc\" DevicePath \"\"" Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.411300 4794 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c548d720-7bad-47af-badb-d01ab54e8afd-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.411358 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7twv\" (UniqueName: \"kubernetes.io/projected/c548d720-7bad-47af-badb-d01ab54e8afd-kube-api-access-c7twv\") on node \"crc\" DevicePath \"\"" Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.856367 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l" Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.856412 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l" event={"ID":"eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa","Type":"ContainerDied","Data":"e259096aaaf39a88c3d7c98bae4ba56ab2cf54bda158b07b4563d519e8ecb843"} Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.856813 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e259096aaaf39a88c3d7c98bae4ba56ab2cf54bda158b07b4563d519e8ecb843" Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.860930 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd" event={"ID":"c548d720-7bad-47af-badb-d01ab54e8afd","Type":"ContainerDied","Data":"3e455407e3599bebedb9fd0fa50f59f7c91ae30556ed35dec96e2074498db194"} Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.860974 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e455407e3599bebedb9fd0fa50f59f7c91ae30556ed35dec96e2074498db194" Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.861049 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd" Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.863034 4794 generic.go:334] "Generic (PLEG): container finished" podID="5b4526e2-b1a7-43ff-9094-13bc4d1f3626" containerID="74507d7c67db4afee30582bb6cdcb2bdd618ac097648c697d0ce36c85be2c2c8" exitCode=0 Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.863079 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47lcs" event={"ID":"5b4526e2-b1a7-43ff-9094-13bc4d1f3626","Type":"ContainerDied","Data":"74507d7c67db4afee30582bb6cdcb2bdd618ac097648c697d0ce36c85be2c2c8"} Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.866880 4794 generic.go:334] "Generic (PLEG): container finished" podID="62ca5417-33d6-4b3f-9558-245fbc4ab116" containerID="ab73987fd238bfda9b2bd754c848eeef7cad4a2d495c237724fe49e4c639ccad" exitCode=0 Feb 16 17:12:22 crc kubenswrapper[4794]: I0216 17:12:22.866919 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jmf6" event={"ID":"62ca5417-33d6-4b3f-9558-245fbc4ab116","Type":"ContainerDied","Data":"ab73987fd238bfda9b2bd754c848eeef7cad4a2d495c237724fe49e4c639ccad"} Feb 16 17:12:23 crc kubenswrapper[4794]: I0216 17:12:23.874659 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jmf6" event={"ID":"62ca5417-33d6-4b3f-9558-245fbc4ab116","Type":"ContainerStarted","Data":"d29b43222cc06e05ddbbec8495ae508f3696cd62846da87555095984f744dd8b"} Feb 16 17:12:23 crc kubenswrapper[4794]: I0216 17:12:23.876669 4794 generic.go:334] "Generic (PLEG): container finished" podID="5b4526e2-b1a7-43ff-9094-13bc4d1f3626" containerID="ead9f4d29f413bf3bb3198637976fd65629690580bed6e3b60e4efa8ce026471" exitCode=0 Feb 16 17:12:23 crc kubenswrapper[4794]: I0216 17:12:23.876699 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-47lcs" event={"ID":"5b4526e2-b1a7-43ff-9094-13bc4d1f3626","Type":"ContainerDied","Data":"ead9f4d29f413bf3bb3198637976fd65629690580bed6e3b60e4efa8ce026471"} Feb 16 17:12:23 crc kubenswrapper[4794]: I0216 17:12:23.901158 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6jmf6" podStartSLOduration=2.402315887 podStartE2EDuration="4.901138512s" podCreationTimestamp="2026-02-16 17:12:19 +0000 UTC" firstStartedPulling="2026-02-16 17:12:20.821622139 +0000 UTC m=+766.769716786" lastFinishedPulling="2026-02-16 17:12:23.320444764 +0000 UTC m=+769.268539411" observedRunningTime="2026-02-16 17:12:23.893672851 +0000 UTC m=+769.841767508" watchObservedRunningTime="2026-02-16 17:12:23.901138512 +0000 UTC m=+769.849233159" Feb 16 17:12:24 crc kubenswrapper[4794]: I0216 17:12:24.885336 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47lcs" event={"ID":"5b4526e2-b1a7-43ff-9094-13bc4d1f3626","Type":"ContainerStarted","Data":"b525e6fc6cca4d8b620544d7916577de16c0d58c9ddb7596a838ada8d528f105"} Feb 16 17:12:24 crc kubenswrapper[4794]: I0216 17:12:24.917880 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-47lcs" podStartSLOduration=2.49125294 podStartE2EDuration="3.91786051s" podCreationTimestamp="2026-02-16 17:12:21 +0000 UTC" firstStartedPulling="2026-02-16 17:12:22.86483189 +0000 UTC m=+768.812926537" lastFinishedPulling="2026-02-16 17:12:24.29143945 +0000 UTC m=+770.239534107" observedRunningTime="2026-02-16 17:12:24.916061919 +0000 UTC m=+770.864156576" watchObservedRunningTime="2026-02-16 17:12:24.91786051 +0000 UTC m=+770.865955167" Feb 16 17:12:26 crc kubenswrapper[4794]: I0216 17:12:26.465082 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-nscl4"] Feb 16 17:12:26 crc kubenswrapper[4794]: 
E0216 17:12:26.465382 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa" containerName="pull" Feb 16 17:12:26 crc kubenswrapper[4794]: I0216 17:12:26.465397 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa" containerName="pull" Feb 16 17:12:26 crc kubenswrapper[4794]: E0216 17:12:26.465412 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa" containerName="extract" Feb 16 17:12:26 crc kubenswrapper[4794]: I0216 17:12:26.465419 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa" containerName="extract" Feb 16 17:12:26 crc kubenswrapper[4794]: E0216 17:12:26.465445 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c548d720-7bad-47af-badb-d01ab54e8afd" containerName="util" Feb 16 17:12:26 crc kubenswrapper[4794]: I0216 17:12:26.465453 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="c548d720-7bad-47af-badb-d01ab54e8afd" containerName="util" Feb 16 17:12:26 crc kubenswrapper[4794]: E0216 17:12:26.465466 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c548d720-7bad-47af-badb-d01ab54e8afd" containerName="extract" Feb 16 17:12:26 crc kubenswrapper[4794]: I0216 17:12:26.465475 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="c548d720-7bad-47af-badb-d01ab54e8afd" containerName="extract" Feb 16 17:12:26 crc kubenswrapper[4794]: E0216 17:12:26.465489 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa" containerName="util" Feb 16 17:12:26 crc kubenswrapper[4794]: I0216 17:12:26.465497 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa" containerName="util" Feb 16 17:12:26 crc kubenswrapper[4794]: E0216 17:12:26.465507 4794 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c548d720-7bad-47af-badb-d01ab54e8afd" containerName="pull" Feb 16 17:12:26 crc kubenswrapper[4794]: I0216 17:12:26.465514 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="c548d720-7bad-47af-badb-d01ab54e8afd" containerName="pull" Feb 16 17:12:26 crc kubenswrapper[4794]: I0216 17:12:26.465654 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa" containerName="extract" Feb 16 17:12:26 crc kubenswrapper[4794]: I0216 17:12:26.465671 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="c548d720-7bad-47af-badb-d01ab54e8afd" containerName="extract" Feb 16 17:12:26 crc kubenswrapper[4794]: I0216 17:12:26.466230 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-nscl4" Feb 16 17:12:26 crc kubenswrapper[4794]: I0216 17:12:26.469970 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-klvkw" Feb 16 17:12:26 crc kubenswrapper[4794]: I0216 17:12:26.470074 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Feb 16 17:12:26 crc kubenswrapper[4794]: I0216 17:12:26.474745 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Feb 16 17:12:26 crc kubenswrapper[4794]: I0216 17:12:26.478182 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njkg6\" (UniqueName: \"kubernetes.io/projected/33b57aff-006a-45ac-8936-d763e799be70-kube-api-access-njkg6\") pod \"cluster-logging-operator-c769fd969-nscl4\" (UID: \"33b57aff-006a-45ac-8936-d763e799be70\") " pod="openshift-logging/cluster-logging-operator-c769fd969-nscl4" Feb 16 17:12:26 crc kubenswrapper[4794]: I0216 17:12:26.484740 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-logging/cluster-logging-operator-c769fd969-nscl4"] Feb 16 17:12:26 crc kubenswrapper[4794]: I0216 17:12:26.579456 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njkg6\" (UniqueName: \"kubernetes.io/projected/33b57aff-006a-45ac-8936-d763e799be70-kube-api-access-njkg6\") pod \"cluster-logging-operator-c769fd969-nscl4\" (UID: \"33b57aff-006a-45ac-8936-d763e799be70\") " pod="openshift-logging/cluster-logging-operator-c769fd969-nscl4" Feb 16 17:12:26 crc kubenswrapper[4794]: I0216 17:12:26.609093 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njkg6\" (UniqueName: \"kubernetes.io/projected/33b57aff-006a-45ac-8936-d763e799be70-kube-api-access-njkg6\") pod \"cluster-logging-operator-c769fd969-nscl4\" (UID: \"33b57aff-006a-45ac-8936-d763e799be70\") " pod="openshift-logging/cluster-logging-operator-c769fd969-nscl4" Feb 16 17:12:26 crc kubenswrapper[4794]: I0216 17:12:26.783366 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-nscl4" Feb 16 17:12:27 crc kubenswrapper[4794]: I0216 17:12:27.023031 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-nscl4"] Feb 16 17:12:27 crc kubenswrapper[4794]: W0216 17:12:27.026697 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33b57aff_006a_45ac_8936_d763e799be70.slice/crio-b19d92c49f81a4f11da26a27236b90e934c03b3108f21dc9a770723b1ef0cc82 WatchSource:0}: Error finding container b19d92c49f81a4f11da26a27236b90e934c03b3108f21dc9a770723b1ef0cc82: Status 404 returned error can't find the container with id b19d92c49f81a4f11da26a27236b90e934c03b3108f21dc9a770723b1ef0cc82 Feb 16 17:12:27 crc kubenswrapper[4794]: I0216 17:12:27.906145 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-nscl4" event={"ID":"33b57aff-006a-45ac-8936-d763e799be70","Type":"ContainerStarted","Data":"b19d92c49f81a4f11da26a27236b90e934c03b3108f21dc9a770723b1ef0cc82"} Feb 16 17:12:30 crc kubenswrapper[4794]: I0216 17:12:30.282543 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6jmf6" Feb 16 17:12:30 crc kubenswrapper[4794]: I0216 17:12:30.282912 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6jmf6" Feb 16 17:12:30 crc kubenswrapper[4794]: I0216 17:12:30.369592 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6jmf6" Feb 16 17:12:30 crc kubenswrapper[4794]: I0216 17:12:30.963408 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6jmf6" Feb 16 17:12:31 crc kubenswrapper[4794]: I0216 17:12:31.479985 4794 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-47lcs" Feb 16 17:12:31 crc kubenswrapper[4794]: I0216 17:12:31.480048 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-47lcs" Feb 16 17:12:31 crc kubenswrapper[4794]: I0216 17:12:31.520292 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-47lcs" Feb 16 17:12:31 crc kubenswrapper[4794]: I0216 17:12:31.975743 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-47lcs" Feb 16 17:12:34 crc kubenswrapper[4794]: I0216 17:12:34.549032 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6jmf6"] Feb 16 17:12:34 crc kubenswrapper[4794]: I0216 17:12:34.549491 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6jmf6" podUID="62ca5417-33d6-4b3f-9558-245fbc4ab116" containerName="registry-server" containerID="cri-o://d29b43222cc06e05ddbbec8495ae508f3696cd62846da87555095984f744dd8b" gracePeriod=2 Feb 16 17:12:34 crc kubenswrapper[4794]: I0216 17:12:34.917214 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6jmf6" Feb 16 17:12:34 crc kubenswrapper[4794]: I0216 17:12:34.955296 4794 generic.go:334] "Generic (PLEG): container finished" podID="62ca5417-33d6-4b3f-9558-245fbc4ab116" containerID="d29b43222cc06e05ddbbec8495ae508f3696cd62846da87555095984f744dd8b" exitCode=0 Feb 16 17:12:34 crc kubenswrapper[4794]: I0216 17:12:34.955392 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jmf6" event={"ID":"62ca5417-33d6-4b3f-9558-245fbc4ab116","Type":"ContainerDied","Data":"d29b43222cc06e05ddbbec8495ae508f3696cd62846da87555095984f744dd8b"} Feb 16 17:12:34 crc kubenswrapper[4794]: I0216 17:12:34.955423 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jmf6" event={"ID":"62ca5417-33d6-4b3f-9558-245fbc4ab116","Type":"ContainerDied","Data":"7609fcc7f31f11a87d0d2934cf1d680ced38ecb45a01816bd64418940a2afa79"} Feb 16 17:12:34 crc kubenswrapper[4794]: I0216 17:12:34.955444 4794 scope.go:117] "RemoveContainer" containerID="d29b43222cc06e05ddbbec8495ae508f3696cd62846da87555095984f744dd8b" Feb 16 17:12:34 crc kubenswrapper[4794]: I0216 17:12:34.955576 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6jmf6" Feb 16 17:12:34 crc kubenswrapper[4794]: I0216 17:12:34.958913 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-nscl4" event={"ID":"33b57aff-006a-45ac-8936-d763e799be70","Type":"ContainerStarted","Data":"472265d8beda3a29a7dcd4b50d8ea3a27d8ef802e6e78afa9dcf1068f434fa8c"} Feb 16 17:12:34 crc kubenswrapper[4794]: I0216 17:12:34.974059 4794 scope.go:117] "RemoveContainer" containerID="ab73987fd238bfda9b2bd754c848eeef7cad4a2d495c237724fe49e4c639ccad" Feb 16 17:12:34 crc kubenswrapper[4794]: I0216 17:12:34.991621 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-nscl4" podStartSLOduration=1.949768886 podStartE2EDuration="8.991602947s" podCreationTimestamp="2026-02-16 17:12:26 +0000 UTC" firstStartedPulling="2026-02-16 17:12:27.029738481 +0000 UTC m=+772.977833128" lastFinishedPulling="2026-02-16 17:12:34.071572542 +0000 UTC m=+780.019667189" observedRunningTime="2026-02-16 17:12:34.989704993 +0000 UTC m=+780.937799640" watchObservedRunningTime="2026-02-16 17:12:34.991602947 +0000 UTC m=+780.939697594" Feb 16 17:12:35 crc kubenswrapper[4794]: I0216 17:12:35.005328 4794 scope.go:117] "RemoveContainer" containerID="621aa927bc08a41045fed0ca6dba1ddcdadcb35b7f7cda2b9edea3aaa5044e7b" Feb 16 17:12:35 crc kubenswrapper[4794]: I0216 17:12:35.013873 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp5mg\" (UniqueName: \"kubernetes.io/projected/62ca5417-33d6-4b3f-9558-245fbc4ab116-kube-api-access-zp5mg\") pod \"62ca5417-33d6-4b3f-9558-245fbc4ab116\" (UID: \"62ca5417-33d6-4b3f-9558-245fbc4ab116\") " Feb 16 17:12:35 crc kubenswrapper[4794]: I0216 17:12:35.013912 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/62ca5417-33d6-4b3f-9558-245fbc4ab116-utilities\") pod \"62ca5417-33d6-4b3f-9558-245fbc4ab116\" (UID: \"62ca5417-33d6-4b3f-9558-245fbc4ab116\") " Feb 16 17:12:35 crc kubenswrapper[4794]: I0216 17:12:35.013939 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62ca5417-33d6-4b3f-9558-245fbc4ab116-catalog-content\") pod \"62ca5417-33d6-4b3f-9558-245fbc4ab116\" (UID: \"62ca5417-33d6-4b3f-9558-245fbc4ab116\") " Feb 16 17:12:35 crc kubenswrapper[4794]: I0216 17:12:35.017178 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62ca5417-33d6-4b3f-9558-245fbc4ab116-utilities" (OuterVolumeSpecName: "utilities") pod "62ca5417-33d6-4b3f-9558-245fbc4ab116" (UID: "62ca5417-33d6-4b3f-9558-245fbc4ab116"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:12:35 crc kubenswrapper[4794]: I0216 17:12:35.024556 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62ca5417-33d6-4b3f-9558-245fbc4ab116-kube-api-access-zp5mg" (OuterVolumeSpecName: "kube-api-access-zp5mg") pod "62ca5417-33d6-4b3f-9558-245fbc4ab116" (UID: "62ca5417-33d6-4b3f-9558-245fbc4ab116"). InnerVolumeSpecName "kube-api-access-zp5mg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:12:35 crc kubenswrapper[4794]: I0216 17:12:35.057496 4794 scope.go:117] "RemoveContainer" containerID="d29b43222cc06e05ddbbec8495ae508f3696cd62846da87555095984f744dd8b" Feb 16 17:12:35 crc kubenswrapper[4794]: E0216 17:12:35.064678 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d29b43222cc06e05ddbbec8495ae508f3696cd62846da87555095984f744dd8b\": container with ID starting with d29b43222cc06e05ddbbec8495ae508f3696cd62846da87555095984f744dd8b not found: ID does not exist" containerID="d29b43222cc06e05ddbbec8495ae508f3696cd62846da87555095984f744dd8b" Feb 16 17:12:35 crc kubenswrapper[4794]: I0216 17:12:35.064723 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d29b43222cc06e05ddbbec8495ae508f3696cd62846da87555095984f744dd8b"} err="failed to get container status \"d29b43222cc06e05ddbbec8495ae508f3696cd62846da87555095984f744dd8b\": rpc error: code = NotFound desc = could not find container \"d29b43222cc06e05ddbbec8495ae508f3696cd62846da87555095984f744dd8b\": container with ID starting with d29b43222cc06e05ddbbec8495ae508f3696cd62846da87555095984f744dd8b not found: ID does not exist" Feb 16 17:12:35 crc kubenswrapper[4794]: I0216 17:12:35.064747 4794 scope.go:117] "RemoveContainer" containerID="ab73987fd238bfda9b2bd754c848eeef7cad4a2d495c237724fe49e4c639ccad" Feb 16 17:12:35 crc kubenswrapper[4794]: E0216 17:12:35.076452 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab73987fd238bfda9b2bd754c848eeef7cad4a2d495c237724fe49e4c639ccad\": container with ID starting with ab73987fd238bfda9b2bd754c848eeef7cad4a2d495c237724fe49e4c639ccad not found: ID does not exist" containerID="ab73987fd238bfda9b2bd754c848eeef7cad4a2d495c237724fe49e4c639ccad" Feb 16 17:12:35 crc kubenswrapper[4794]: I0216 17:12:35.076512 
4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab73987fd238bfda9b2bd754c848eeef7cad4a2d495c237724fe49e4c639ccad"} err="failed to get container status \"ab73987fd238bfda9b2bd754c848eeef7cad4a2d495c237724fe49e4c639ccad\": rpc error: code = NotFound desc = could not find container \"ab73987fd238bfda9b2bd754c848eeef7cad4a2d495c237724fe49e4c639ccad\": container with ID starting with ab73987fd238bfda9b2bd754c848eeef7cad4a2d495c237724fe49e4c639ccad not found: ID does not exist" Feb 16 17:12:35 crc kubenswrapper[4794]: I0216 17:12:35.076543 4794 scope.go:117] "RemoveContainer" containerID="621aa927bc08a41045fed0ca6dba1ddcdadcb35b7f7cda2b9edea3aaa5044e7b" Feb 16 17:12:35 crc kubenswrapper[4794]: E0216 17:12:35.076976 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"621aa927bc08a41045fed0ca6dba1ddcdadcb35b7f7cda2b9edea3aaa5044e7b\": container with ID starting with 621aa927bc08a41045fed0ca6dba1ddcdadcb35b7f7cda2b9edea3aaa5044e7b not found: ID does not exist" containerID="621aa927bc08a41045fed0ca6dba1ddcdadcb35b7f7cda2b9edea3aaa5044e7b" Feb 16 17:12:35 crc kubenswrapper[4794]: I0216 17:12:35.077012 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"621aa927bc08a41045fed0ca6dba1ddcdadcb35b7f7cda2b9edea3aaa5044e7b"} err="failed to get container status \"621aa927bc08a41045fed0ca6dba1ddcdadcb35b7f7cda2b9edea3aaa5044e7b\": rpc error: code = NotFound desc = could not find container \"621aa927bc08a41045fed0ca6dba1ddcdadcb35b7f7cda2b9edea3aaa5044e7b\": container with ID starting with 621aa927bc08a41045fed0ca6dba1ddcdadcb35b7f7cda2b9edea3aaa5044e7b not found: ID does not exist" Feb 16 17:12:35 crc kubenswrapper[4794]: I0216 17:12:35.117794 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp5mg\" (UniqueName: 
\"kubernetes.io/projected/62ca5417-33d6-4b3f-9558-245fbc4ab116-kube-api-access-zp5mg\") on node \"crc\" DevicePath \"\"" Feb 16 17:12:35 crc kubenswrapper[4794]: I0216 17:12:35.117832 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62ca5417-33d6-4b3f-9558-245fbc4ab116-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:12:35 crc kubenswrapper[4794]: I0216 17:12:35.206096 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62ca5417-33d6-4b3f-9558-245fbc4ab116-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62ca5417-33d6-4b3f-9558-245fbc4ab116" (UID: "62ca5417-33d6-4b3f-9558-245fbc4ab116"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:12:35 crc kubenswrapper[4794]: I0216 17:12:35.219171 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62ca5417-33d6-4b3f-9558-245fbc4ab116-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:12:35 crc kubenswrapper[4794]: I0216 17:12:35.299260 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6jmf6"] Feb 16 17:12:35 crc kubenswrapper[4794]: I0216 17:12:35.305897 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6jmf6"] Feb 16 17:12:35 crc kubenswrapper[4794]: I0216 17:12:35.953761 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-47lcs"] Feb 16 17:12:35 crc kubenswrapper[4794]: I0216 17:12:35.954170 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-47lcs" podUID="5b4526e2-b1a7-43ff-9094-13bc4d1f3626" containerName="registry-server" containerID="cri-o://b525e6fc6cca4d8b620544d7916577de16c0d58c9ddb7596a838ada8d528f105" gracePeriod=2 Feb 16 17:12:36 crc 
kubenswrapper[4794]: I0216 17:12:36.139487 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p"] Feb 16 17:12:36 crc kubenswrapper[4794]: E0216 17:12:36.139816 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ca5417-33d6-4b3f-9558-245fbc4ab116" containerName="registry-server" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.139838 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ca5417-33d6-4b3f-9558-245fbc4ab116" containerName="registry-server" Feb 16 17:12:36 crc kubenswrapper[4794]: E0216 17:12:36.139867 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ca5417-33d6-4b3f-9558-245fbc4ab116" containerName="extract-content" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.139875 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ca5417-33d6-4b3f-9558-245fbc4ab116" containerName="extract-content" Feb 16 17:12:36 crc kubenswrapper[4794]: E0216 17:12:36.139888 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ca5417-33d6-4b3f-9558-245fbc4ab116" containerName="extract-utilities" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.139898 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ca5417-33d6-4b3f-9558-245fbc4ab116" containerName="extract-utilities" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.140036 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="62ca5417-33d6-4b3f-9558-245fbc4ab116" containerName="registry-server" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.140872 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.142607 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.143982 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.144179 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.144755 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-787j8" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.145651 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.146081 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.161510 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p"] Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.232165 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1a441979-8971-4f00-9a49-0dbd7d90d537-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-8499595899-t6s7p\" (UID: \"1a441979-8971-4f00-9a49-0dbd7d90d537\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" Feb 16 17:12:36 crc 
kubenswrapper[4794]: I0216 17:12:36.232230 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6gwj\" (UniqueName: \"kubernetes.io/projected/1a441979-8971-4f00-9a49-0dbd7d90d537-kube-api-access-g6gwj\") pod \"loki-operator-controller-manager-8499595899-t6s7p\" (UID: \"1a441979-8971-4f00-9a49-0dbd7d90d537\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.232268 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a441979-8971-4f00-9a49-0dbd7d90d537-webhook-cert\") pod \"loki-operator-controller-manager-8499595899-t6s7p\" (UID: \"1a441979-8971-4f00-9a49-0dbd7d90d537\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.232514 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a441979-8971-4f00-9a49-0dbd7d90d537-apiservice-cert\") pod \"loki-operator-controller-manager-8499595899-t6s7p\" (UID: \"1a441979-8971-4f00-9a49-0dbd7d90d537\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.232584 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/1a441979-8971-4f00-9a49-0dbd7d90d537-manager-config\") pod \"loki-operator-controller-manager-8499595899-t6s7p\" (UID: \"1a441979-8971-4f00-9a49-0dbd7d90d537\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.334293 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a441979-8971-4f00-9a49-0dbd7d90d537-apiservice-cert\") pod \"loki-operator-controller-manager-8499595899-t6s7p\" (UID: \"1a441979-8971-4f00-9a49-0dbd7d90d537\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.334739 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/1a441979-8971-4f00-9a49-0dbd7d90d537-manager-config\") pod \"loki-operator-controller-manager-8499595899-t6s7p\" (UID: \"1a441979-8971-4f00-9a49-0dbd7d90d537\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.334773 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1a441979-8971-4f00-9a49-0dbd7d90d537-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-8499595899-t6s7p\" (UID: \"1a441979-8971-4f00-9a49-0dbd7d90d537\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.334820 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6gwj\" (UniqueName: \"kubernetes.io/projected/1a441979-8971-4f00-9a49-0dbd7d90d537-kube-api-access-g6gwj\") pod \"loki-operator-controller-manager-8499595899-t6s7p\" (UID: \"1a441979-8971-4f00-9a49-0dbd7d90d537\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.334854 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a441979-8971-4f00-9a49-0dbd7d90d537-webhook-cert\") pod \"loki-operator-controller-manager-8499595899-t6s7p\" (UID: 
\"1a441979-8971-4f00-9a49-0dbd7d90d537\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.339729 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/1a441979-8971-4f00-9a49-0dbd7d90d537-manager-config\") pod \"loki-operator-controller-manager-8499595899-t6s7p\" (UID: \"1a441979-8971-4f00-9a49-0dbd7d90d537\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.349133 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a441979-8971-4f00-9a49-0dbd7d90d537-apiservice-cert\") pod \"loki-operator-controller-manager-8499595899-t6s7p\" (UID: \"1a441979-8971-4f00-9a49-0dbd7d90d537\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.350081 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1a441979-8971-4f00-9a49-0dbd7d90d537-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-8499595899-t6s7p\" (UID: \"1a441979-8971-4f00-9a49-0dbd7d90d537\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.350608 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a441979-8971-4f00-9a49-0dbd7d90d537-webhook-cert\") pod \"loki-operator-controller-manager-8499595899-t6s7p\" (UID: \"1a441979-8971-4f00-9a49-0dbd7d90d537\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.367228 4794 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-g6gwj\" (UniqueName: \"kubernetes.io/projected/1a441979-8971-4f00-9a49-0dbd7d90d537-kube-api-access-g6gwj\") pod \"loki-operator-controller-manager-8499595899-t6s7p\" (UID: \"1a441979-8971-4f00-9a49-0dbd7d90d537\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.463886 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.531111 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-47lcs" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.639808 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4526e2-b1a7-43ff-9094-13bc4d1f3626-utilities\") pod \"5b4526e2-b1a7-43ff-9094-13bc4d1f3626\" (UID: \"5b4526e2-b1a7-43ff-9094-13bc4d1f3626\") " Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.640186 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4526e2-b1a7-43ff-9094-13bc4d1f3626-catalog-content\") pod \"5b4526e2-b1a7-43ff-9094-13bc4d1f3626\" (UID: \"5b4526e2-b1a7-43ff-9094-13bc4d1f3626\") " Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.640259 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zhjf\" (UniqueName: \"kubernetes.io/projected/5b4526e2-b1a7-43ff-9094-13bc4d1f3626-kube-api-access-8zhjf\") pod \"5b4526e2-b1a7-43ff-9094-13bc4d1f3626\" (UID: \"5b4526e2-b1a7-43ff-9094-13bc4d1f3626\") " Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.640768 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5b4526e2-b1a7-43ff-9094-13bc4d1f3626-utilities" (OuterVolumeSpecName: "utilities") pod "5b4526e2-b1a7-43ff-9094-13bc4d1f3626" (UID: "5b4526e2-b1a7-43ff-9094-13bc4d1f3626"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.646534 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b4526e2-b1a7-43ff-9094-13bc4d1f3626-kube-api-access-8zhjf" (OuterVolumeSpecName: "kube-api-access-8zhjf") pod "5b4526e2-b1a7-43ff-9094-13bc4d1f3626" (UID: "5b4526e2-b1a7-43ff-9094-13bc4d1f3626"). InnerVolumeSpecName "kube-api-access-8zhjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.676025 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b4526e2-b1a7-43ff-9094-13bc4d1f3626-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b4526e2-b1a7-43ff-9094-13bc4d1f3626" (UID: "5b4526e2-b1a7-43ff-9094-13bc4d1f3626"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.745254 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4526e2-b1a7-43ff-9094-13bc4d1f3626-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.745297 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4526e2-b1a7-43ff-9094-13bc4d1f3626-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.745329 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zhjf\" (UniqueName: \"kubernetes.io/projected/5b4526e2-b1a7-43ff-9094-13bc4d1f3626-kube-api-access-8zhjf\") on node \"crc\" DevicePath \"\"" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.806865 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62ca5417-33d6-4b3f-9558-245fbc4ab116" path="/var/lib/kubelet/pods/62ca5417-33d6-4b3f-9558-245fbc4ab116/volumes" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.894086 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p"] Feb 16 17:12:36 crc kubenswrapper[4794]: W0216 17:12:36.901146 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a441979_8971_4f00_9a49_0dbd7d90d537.slice/crio-41af830142d8d303ed15b44342b1e11bba16c091376166a55c03fef2ba304d0b WatchSource:0}: Error finding container 41af830142d8d303ed15b44342b1e11bba16c091376166a55c03fef2ba304d0b: Status 404 returned error can't find the container with id 41af830142d8d303ed15b44342b1e11bba16c091376166a55c03fef2ba304d0b Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.972132 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" event={"ID":"1a441979-8971-4f00-9a49-0dbd7d90d537","Type":"ContainerStarted","Data":"41af830142d8d303ed15b44342b1e11bba16c091376166a55c03fef2ba304d0b"} Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.974861 4794 generic.go:334] "Generic (PLEG): container finished" podID="5b4526e2-b1a7-43ff-9094-13bc4d1f3626" containerID="b525e6fc6cca4d8b620544d7916577de16c0d58c9ddb7596a838ada8d528f105" exitCode=0 Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.974913 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47lcs" event={"ID":"5b4526e2-b1a7-43ff-9094-13bc4d1f3626","Type":"ContainerDied","Data":"b525e6fc6cca4d8b620544d7916577de16c0d58c9ddb7596a838ada8d528f105"} Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.974946 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47lcs" event={"ID":"5b4526e2-b1a7-43ff-9094-13bc4d1f3626","Type":"ContainerDied","Data":"3f87ab56dc62e4afe2a5fe94bded639eb172d674f91732646dc191f4fdfdf69d"} Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.974961 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-47lcs" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.974977 4794 scope.go:117] "RemoveContainer" containerID="b525e6fc6cca4d8b620544d7916577de16c0d58c9ddb7596a838ada8d528f105" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.997170 4794 scope.go:117] "RemoveContainer" containerID="ead9f4d29f413bf3bb3198637976fd65629690580bed6e3b60e4efa8ce026471" Feb 16 17:12:36 crc kubenswrapper[4794]: I0216 17:12:36.998053 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-47lcs"] Feb 16 17:12:37 crc kubenswrapper[4794]: I0216 17:12:37.014655 4794 scope.go:117] "RemoveContainer" containerID="74507d7c67db4afee30582bb6cdcb2bdd618ac097648c697d0ce36c85be2c2c8" Feb 16 17:12:37 crc kubenswrapper[4794]: I0216 17:12:37.015841 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-47lcs"] Feb 16 17:12:37 crc kubenswrapper[4794]: I0216 17:12:37.029181 4794 scope.go:117] "RemoveContainer" containerID="b525e6fc6cca4d8b620544d7916577de16c0d58c9ddb7596a838ada8d528f105" Feb 16 17:12:37 crc kubenswrapper[4794]: E0216 17:12:37.029628 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b525e6fc6cca4d8b620544d7916577de16c0d58c9ddb7596a838ada8d528f105\": container with ID starting with b525e6fc6cca4d8b620544d7916577de16c0d58c9ddb7596a838ada8d528f105 not found: ID does not exist" containerID="b525e6fc6cca4d8b620544d7916577de16c0d58c9ddb7596a838ada8d528f105" Feb 16 17:12:37 crc kubenswrapper[4794]: I0216 17:12:37.029663 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b525e6fc6cca4d8b620544d7916577de16c0d58c9ddb7596a838ada8d528f105"} err="failed to get container status \"b525e6fc6cca4d8b620544d7916577de16c0d58c9ddb7596a838ada8d528f105\": rpc error: code = NotFound desc = could not find container 
\"b525e6fc6cca4d8b620544d7916577de16c0d58c9ddb7596a838ada8d528f105\": container with ID starting with b525e6fc6cca4d8b620544d7916577de16c0d58c9ddb7596a838ada8d528f105 not found: ID does not exist" Feb 16 17:12:37 crc kubenswrapper[4794]: I0216 17:12:37.029684 4794 scope.go:117] "RemoveContainer" containerID="ead9f4d29f413bf3bb3198637976fd65629690580bed6e3b60e4efa8ce026471" Feb 16 17:12:37 crc kubenswrapper[4794]: E0216 17:12:37.030183 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ead9f4d29f413bf3bb3198637976fd65629690580bed6e3b60e4efa8ce026471\": container with ID starting with ead9f4d29f413bf3bb3198637976fd65629690580bed6e3b60e4efa8ce026471 not found: ID does not exist" containerID="ead9f4d29f413bf3bb3198637976fd65629690580bed6e3b60e4efa8ce026471" Feb 16 17:12:37 crc kubenswrapper[4794]: I0216 17:12:37.030231 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ead9f4d29f413bf3bb3198637976fd65629690580bed6e3b60e4efa8ce026471"} err="failed to get container status \"ead9f4d29f413bf3bb3198637976fd65629690580bed6e3b60e4efa8ce026471\": rpc error: code = NotFound desc = could not find container \"ead9f4d29f413bf3bb3198637976fd65629690580bed6e3b60e4efa8ce026471\": container with ID starting with ead9f4d29f413bf3bb3198637976fd65629690580bed6e3b60e4efa8ce026471 not found: ID does not exist" Feb 16 17:12:37 crc kubenswrapper[4794]: I0216 17:12:37.030270 4794 scope.go:117] "RemoveContainer" containerID="74507d7c67db4afee30582bb6cdcb2bdd618ac097648c697d0ce36c85be2c2c8" Feb 16 17:12:37 crc kubenswrapper[4794]: E0216 17:12:37.030692 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74507d7c67db4afee30582bb6cdcb2bdd618ac097648c697d0ce36c85be2c2c8\": container with ID starting with 74507d7c67db4afee30582bb6cdcb2bdd618ac097648c697d0ce36c85be2c2c8 not found: ID does not exist" 
containerID="74507d7c67db4afee30582bb6cdcb2bdd618ac097648c697d0ce36c85be2c2c8" Feb 16 17:12:37 crc kubenswrapper[4794]: I0216 17:12:37.030723 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74507d7c67db4afee30582bb6cdcb2bdd618ac097648c697d0ce36c85be2c2c8"} err="failed to get container status \"74507d7c67db4afee30582bb6cdcb2bdd618ac097648c697d0ce36c85be2c2c8\": rpc error: code = NotFound desc = could not find container \"74507d7c67db4afee30582bb6cdcb2bdd618ac097648c697d0ce36c85be2c2c8\": container with ID starting with 74507d7c67db4afee30582bb6cdcb2bdd618ac097648c697d0ce36c85be2c2c8 not found: ID does not exist" Feb 16 17:12:38 crc kubenswrapper[4794]: I0216 17:12:38.800320 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b4526e2-b1a7-43ff-9094-13bc4d1f3626" path="/var/lib/kubelet/pods/5b4526e2-b1a7-43ff-9094-13bc4d1f3626/volumes" Feb 16 17:12:42 crc kubenswrapper[4794]: I0216 17:12:42.011499 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" event={"ID":"1a441979-8971-4f00-9a49-0dbd7d90d537","Type":"ContainerStarted","Data":"bf5a725141658c4a00d8d7a2b4ecde3c557fef5d660b5c0b2bda237ae1a1a96a"} Feb 16 17:12:48 crc kubenswrapper[4794]: I0216 17:12:48.062781 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" event={"ID":"1a441979-8971-4f00-9a49-0dbd7d90d537","Type":"ContainerStarted","Data":"c6092d94f755181959c50e9118dc6fa96325db21745553a2b15c213375d46c55"} Feb 16 17:12:48 crc kubenswrapper[4794]: I0216 17:12:48.063485 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" Feb 16 17:12:48 crc kubenswrapper[4794]: I0216 17:12:48.069816 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" Feb 16 17:12:48 crc kubenswrapper[4794]: I0216 17:12:48.088630 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-8499595899-t6s7p" podStartSLOduration=2.001094749 podStartE2EDuration="12.088603042s" podCreationTimestamp="2026-02-16 17:12:36 +0000 UTC" firstStartedPulling="2026-02-16 17:12:36.907480948 +0000 UTC m=+782.855575605" lastFinishedPulling="2026-02-16 17:12:46.994989251 +0000 UTC m=+792.943083898" observedRunningTime="2026-02-16 17:12:48.086151553 +0000 UTC m=+794.034246220" watchObservedRunningTime="2026-02-16 17:12:48.088603042 +0000 UTC m=+794.036697719" Feb 16 17:12:51 crc kubenswrapper[4794]: I0216 17:12:51.837067 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 16 17:12:51 crc kubenswrapper[4794]: E0216 17:12:51.837715 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4526e2-b1a7-43ff-9094-13bc4d1f3626" containerName="extract-utilities" Feb 16 17:12:51 crc kubenswrapper[4794]: I0216 17:12:51.837727 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4526e2-b1a7-43ff-9094-13bc4d1f3626" containerName="extract-utilities" Feb 16 17:12:51 crc kubenswrapper[4794]: E0216 17:12:51.837751 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4526e2-b1a7-43ff-9094-13bc4d1f3626" containerName="extract-content" Feb 16 17:12:51 crc kubenswrapper[4794]: I0216 17:12:51.837758 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4526e2-b1a7-43ff-9094-13bc4d1f3626" containerName="extract-content" Feb 16 17:12:51 crc kubenswrapper[4794]: E0216 17:12:51.837765 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4526e2-b1a7-43ff-9094-13bc4d1f3626" containerName="registry-server" Feb 16 17:12:51 crc kubenswrapper[4794]: I0216 17:12:51.837771 4794 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5b4526e2-b1a7-43ff-9094-13bc4d1f3626" containerName="registry-server" Feb 16 17:12:51 crc kubenswrapper[4794]: I0216 17:12:51.837884 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b4526e2-b1a7-43ff-9094-13bc4d1f3626" containerName="registry-server" Feb 16 17:12:51 crc kubenswrapper[4794]: I0216 17:12:51.838270 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 16 17:12:51 crc kubenswrapper[4794]: I0216 17:12:51.844855 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 16 17:12:51 crc kubenswrapper[4794]: I0216 17:12:51.845123 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 16 17:12:51 crc kubenswrapper[4794]: I0216 17:12:51.845191 4794 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-97vvk" Feb 16 17:12:51 crc kubenswrapper[4794]: I0216 17:12:51.855468 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 16 17:12:51 crc kubenswrapper[4794]: I0216 17:12:51.977561 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcrt6\" (UniqueName: \"kubernetes.io/projected/431fca7a-96e4-46c0-a862-f2c164bd20a7-kube-api-access-tcrt6\") pod \"minio\" (UID: \"431fca7a-96e4-46c0-a862-f2c164bd20a7\") " pod="minio-dev/minio" Feb 16 17:12:51 crc kubenswrapper[4794]: I0216 17:12:51.977643 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1354d480-f8e3-4ebc-aee5-7ba4ec60383a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1354d480-f8e3-4ebc-aee5-7ba4ec60383a\") pod \"minio\" (UID: \"431fca7a-96e4-46c0-a862-f2c164bd20a7\") " pod="minio-dev/minio" Feb 16 17:12:52 crc kubenswrapper[4794]: I0216 17:12:52.079066 4794 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-tcrt6\" (UniqueName: \"kubernetes.io/projected/431fca7a-96e4-46c0-a862-f2c164bd20a7-kube-api-access-tcrt6\") pod \"minio\" (UID: \"431fca7a-96e4-46c0-a862-f2c164bd20a7\") " pod="minio-dev/minio" Feb 16 17:12:52 crc kubenswrapper[4794]: I0216 17:12:52.079623 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1354d480-f8e3-4ebc-aee5-7ba4ec60383a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1354d480-f8e3-4ebc-aee5-7ba4ec60383a\") pod \"minio\" (UID: \"431fca7a-96e4-46c0-a862-f2c164bd20a7\") " pod="minio-dev/minio" Feb 16 17:12:52 crc kubenswrapper[4794]: I0216 17:12:52.085245 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:12:52 crc kubenswrapper[4794]: I0216 17:12:52.085434 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1354d480-f8e3-4ebc-aee5-7ba4ec60383a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1354d480-f8e3-4ebc-aee5-7ba4ec60383a\") pod \"minio\" (UID: \"431fca7a-96e4-46c0-a862-f2c164bd20a7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/70483a54544dd3195f235f2ec5389871c8587aabf1bae85c13631cbada587326/globalmount\"" pod="minio-dev/minio" Feb 16 17:12:52 crc kubenswrapper[4794]: I0216 17:12:52.104022 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcrt6\" (UniqueName: \"kubernetes.io/projected/431fca7a-96e4-46c0-a862-f2c164bd20a7-kube-api-access-tcrt6\") pod \"minio\" (UID: \"431fca7a-96e4-46c0-a862-f2c164bd20a7\") " pod="minio-dev/minio" Feb 16 17:12:52 crc kubenswrapper[4794]: I0216 17:12:52.117103 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1354d480-f8e3-4ebc-aee5-7ba4ec60383a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1354d480-f8e3-4ebc-aee5-7ba4ec60383a\") 
pod \"minio\" (UID: \"431fca7a-96e4-46c0-a862-f2c164bd20a7\") " pod="minio-dev/minio" Feb 16 17:12:52 crc kubenswrapper[4794]: I0216 17:12:52.152682 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 16 17:12:52 crc kubenswrapper[4794]: I0216 17:12:52.558490 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 16 17:12:53 crc kubenswrapper[4794]: I0216 17:12:53.099877 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"431fca7a-96e4-46c0-a862-f2c164bd20a7","Type":"ContainerStarted","Data":"e0ea0ba18479397d1b517ee6fabdd4aee0980bea4bf90360edc85fcab99d3ea6"} Feb 16 17:12:56 crc kubenswrapper[4794]: I0216 17:12:56.120151 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"431fca7a-96e4-46c0-a862-f2c164bd20a7","Type":"ContainerStarted","Data":"ef405cb1b0183c62fa4403cd107eb7fc6bfe222a16c15e77c20d45324377dad8"} Feb 16 17:12:56 crc kubenswrapper[4794]: I0216 17:12:56.145166 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.359790425 podStartE2EDuration="7.145141266s" podCreationTimestamp="2026-02-16 17:12:49 +0000 UTC" firstStartedPulling="2026-02-16 17:12:52.564368476 +0000 UTC m=+798.512463123" lastFinishedPulling="2026-02-16 17:12:55.349719317 +0000 UTC m=+801.297813964" observedRunningTime="2026-02-16 17:12:56.14003206 +0000 UTC m=+802.088126707" watchObservedRunningTime="2026-02-16 17:12:56.145141266 +0000 UTC m=+802.093235913" Feb 16 17:13:00 crc kubenswrapper[4794]: I0216 17:13:00.949361 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f"] Feb 16 17:13:00 crc kubenswrapper[4794]: I0216 17:13:00.950900 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" Feb 16 17:13:00 crc kubenswrapper[4794]: I0216 17:13:00.953946 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Feb 16 17:13:00 crc kubenswrapper[4794]: I0216 17:13:00.953954 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Feb 16 17:13:00 crc kubenswrapper[4794]: I0216 17:13:00.954327 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Feb 16 17:13:00 crc kubenswrapper[4794]: I0216 17:13:00.954458 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Feb 16 17:13:00 crc kubenswrapper[4794]: I0216 17:13:00.954463 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-9pw8v" Feb 16 17:13:00 crc kubenswrapper[4794]: I0216 17:13:00.970091 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f"] Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.105952 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj"] Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.107192 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.108136 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/284971a6-d034-4e31-b64b-4e842d877aed-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-zvg2f\" (UID: \"284971a6-d034-4e31-b64b-4e842d877aed\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.108215 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg7pj\" (UniqueName: \"kubernetes.io/projected/284971a6-d034-4e31-b64b-4e842d877aed-kube-api-access-mg7pj\") pod \"logging-loki-distributor-5d5548c9f5-zvg2f\" (UID: \"284971a6-d034-4e31-b64b-4e842d877aed\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.108348 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/284971a6-d034-4e31-b64b-4e842d877aed-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-zvg2f\" (UID: \"284971a6-d034-4e31-b64b-4e842d877aed\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.108382 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/284971a6-d034-4e31-b64b-4e842d877aed-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-zvg2f\" (UID: \"284971a6-d034-4e31-b64b-4e842d877aed\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 
17:13:01.108404 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284971a6-d034-4e31-b64b-4e842d877aed-config\") pod \"logging-loki-distributor-5d5548c9f5-zvg2f\" (UID: \"284971a6-d034-4e31-b64b-4e842d877aed\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.108789 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.109463 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.109757 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.123246 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj"] Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.202753 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf"] Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.203585 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.210011 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0814a3c5-3284-4e33-b3cc-4b4163bbcaa1-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-cm8fj\" (UID: \"0814a3c5-3284-4e33-b3cc-4b4163bbcaa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.210060 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/284971a6-d034-4e31-b64b-4e842d877aed-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-zvg2f\" (UID: \"284971a6-d034-4e31-b64b-4e842d877aed\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.210088 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/284971a6-d034-4e31-b64b-4e842d877aed-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-zvg2f\" (UID: \"284971a6-d034-4e31-b64b-4e842d877aed\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.210106 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284971a6-d034-4e31-b64b-4e842d877aed-config\") pod \"logging-loki-distributor-5d5548c9f5-zvg2f\" (UID: \"284971a6-d034-4e31-b64b-4e842d877aed\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.210136 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-thtlk\" (UniqueName: \"kubernetes.io/projected/0814a3c5-3284-4e33-b3cc-4b4163bbcaa1-kube-api-access-thtlk\") pod \"logging-loki-querier-76bf7b6d45-cm8fj\" (UID: \"0814a3c5-3284-4e33-b3cc-4b4163bbcaa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.210157 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/284971a6-d034-4e31-b64b-4e842d877aed-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-zvg2f\" (UID: \"284971a6-d034-4e31-b64b-4e842d877aed\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.210182 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg7pj\" (UniqueName: \"kubernetes.io/projected/284971a6-d034-4e31-b64b-4e842d877aed-kube-api-access-mg7pj\") pod \"logging-loki-distributor-5d5548c9f5-zvg2f\" (UID: \"284971a6-d034-4e31-b64b-4e842d877aed\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.210210 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/0814a3c5-3284-4e33-b3cc-4b4163bbcaa1-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-cm8fj\" (UID: \"0814a3c5-3284-4e33-b3cc-4b4163bbcaa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.210232 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0814a3c5-3284-4e33-b3cc-4b4163bbcaa1-config\") pod \"logging-loki-querier-76bf7b6d45-cm8fj\" (UID: \"0814a3c5-3284-4e33-b3cc-4b4163bbcaa1\") " 
pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.210259 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/0814a3c5-3284-4e33-b3cc-4b4163bbcaa1-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-cm8fj\" (UID: \"0814a3c5-3284-4e33-b3cc-4b4163bbcaa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.210283 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/0814a3c5-3284-4e33-b3cc-4b4163bbcaa1-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-cm8fj\" (UID: \"0814a3c5-3284-4e33-b3cc-4b4163bbcaa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.211093 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/284971a6-d034-4e31-b64b-4e842d877aed-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-zvg2f\" (UID: \"284971a6-d034-4e31-b64b-4e842d877aed\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.211421 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284971a6-d034-4e31-b64b-4e842d877aed-config\") pod \"logging-loki-distributor-5d5548c9f5-zvg2f\" (UID: \"284971a6-d034-4e31-b64b-4e842d877aed\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.215519 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 16 17:13:01 
crc kubenswrapper[4794]: I0216 17:13:01.217348 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/284971a6-d034-4e31-b64b-4e842d877aed-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-zvg2f\" (UID: \"284971a6-d034-4e31-b64b-4e842d877aed\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.218897 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/284971a6-d034-4e31-b64b-4e842d877aed-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-zvg2f\" (UID: \"284971a6-d034-4e31-b64b-4e842d877aed\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.223571 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.249053 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf"] Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.255073 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg7pj\" (UniqueName: \"kubernetes.io/projected/284971a6-d034-4e31-b64b-4e842d877aed-kube-api-access-mg7pj\") pod \"logging-loki-distributor-5d5548c9f5-zvg2f\" (UID: \"284971a6-d034-4e31-b64b-4e842d877aed\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.270621 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.315098 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thtlk\" (UniqueName: \"kubernetes.io/projected/0814a3c5-3284-4e33-b3cc-4b4163bbcaa1-kube-api-access-thtlk\") pod \"logging-loki-querier-76bf7b6d45-cm8fj\" (UID: \"0814a3c5-3284-4e33-b3cc-4b4163bbcaa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.315160 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5447b950-1b55-4b40-8f6f-5fde1e6fdf58-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-4dmjf\" (UID: \"5447b950-1b55-4b40-8f6f-5fde1e6fdf58\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.315227 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/0814a3c5-3284-4e33-b3cc-4b4163bbcaa1-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-cm8fj\" (UID: \"0814a3c5-3284-4e33-b3cc-4b4163bbcaa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.315258 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxrz9\" (UniqueName: \"kubernetes.io/projected/5447b950-1b55-4b40-8f6f-5fde1e6fdf58-kube-api-access-pxrz9\") pod \"logging-loki-query-frontend-6d6859c548-4dmjf\" (UID: \"5447b950-1b55-4b40-8f6f-5fde1e6fdf58\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.315281 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/0814a3c5-3284-4e33-b3cc-4b4163bbcaa1-config\") pod \"logging-loki-querier-76bf7b6d45-cm8fj\" (UID: \"0814a3c5-3284-4e33-b3cc-4b4163bbcaa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.315339 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/0814a3c5-3284-4e33-b3cc-4b4163bbcaa1-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-cm8fj\" (UID: \"0814a3c5-3284-4e33-b3cc-4b4163bbcaa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.315372 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/0814a3c5-3284-4e33-b3cc-4b4163bbcaa1-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-cm8fj\" (UID: \"0814a3c5-3284-4e33-b3cc-4b4163bbcaa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.315406 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5447b950-1b55-4b40-8f6f-5fde1e6fdf58-config\") pod \"logging-loki-query-frontend-6d6859c548-4dmjf\" (UID: \"5447b950-1b55-4b40-8f6f-5fde1e6fdf58\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.315440 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0814a3c5-3284-4e33-b3cc-4b4163bbcaa1-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-cm8fj\" (UID: \"0814a3c5-3284-4e33-b3cc-4b4163bbcaa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 
17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.315482 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/5447b950-1b55-4b40-8f6f-5fde1e6fdf58-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-4dmjf\" (UID: \"5447b950-1b55-4b40-8f6f-5fde1e6fdf58\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.315508 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/5447b950-1b55-4b40-8f6f-5fde1e6fdf58-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-4dmjf\" (UID: \"5447b950-1b55-4b40-8f6f-5fde1e6fdf58\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.316795 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0814a3c5-3284-4e33-b3cc-4b4163bbcaa1-config\") pod \"logging-loki-querier-76bf7b6d45-cm8fj\" (UID: \"0814a3c5-3284-4e33-b3cc-4b4163bbcaa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.317259 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0814a3c5-3284-4e33-b3cc-4b4163bbcaa1-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-cm8fj\" (UID: \"0814a3c5-3284-4e33-b3cc-4b4163bbcaa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.332233 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: 
\"kubernetes.io/secret/0814a3c5-3284-4e33-b3cc-4b4163bbcaa1-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-cm8fj\" (UID: \"0814a3c5-3284-4e33-b3cc-4b4163bbcaa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.340097 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/0814a3c5-3284-4e33-b3cc-4b4163bbcaa1-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-cm8fj\" (UID: \"0814a3c5-3284-4e33-b3cc-4b4163bbcaa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.342000 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/0814a3c5-3284-4e33-b3cc-4b4163bbcaa1-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-cm8fj\" (UID: \"0814a3c5-3284-4e33-b3cc-4b4163bbcaa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.353491 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thtlk\" (UniqueName: \"kubernetes.io/projected/0814a3c5-3284-4e33-b3cc-4b4163bbcaa1-kube-api-access-thtlk\") pod \"logging-loki-querier-76bf7b6d45-cm8fj\" (UID: \"0814a3c5-3284-4e33-b3cc-4b4163bbcaa1\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.421937 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5447b950-1b55-4b40-8f6f-5fde1e6fdf58-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-4dmjf\" (UID: \"5447b950-1b55-4b40-8f6f-5fde1e6fdf58\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 
17:13:01.422033 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxrz9\" (UniqueName: \"kubernetes.io/projected/5447b950-1b55-4b40-8f6f-5fde1e6fdf58-kube-api-access-pxrz9\") pod \"logging-loki-query-frontend-6d6859c548-4dmjf\" (UID: \"5447b950-1b55-4b40-8f6f-5fde1e6fdf58\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.422085 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5447b950-1b55-4b40-8f6f-5fde1e6fdf58-config\") pod \"logging-loki-query-frontend-6d6859c548-4dmjf\" (UID: \"5447b950-1b55-4b40-8f6f-5fde1e6fdf58\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.422129 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/5447b950-1b55-4b40-8f6f-5fde1e6fdf58-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-4dmjf\" (UID: \"5447b950-1b55-4b40-8f6f-5fde1e6fdf58\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.422156 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/5447b950-1b55-4b40-8f6f-5fde1e6fdf58-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-4dmjf\" (UID: \"5447b950-1b55-4b40-8f6f-5fde1e6fdf58\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.423469 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5447b950-1b55-4b40-8f6f-5fde1e6fdf58-logging-loki-ca-bundle\") 
pod \"logging-loki-query-frontend-6d6859c548-4dmjf\" (UID: \"5447b950-1b55-4b40-8f6f-5fde1e6fdf58\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.424083 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5447b950-1b55-4b40-8f6f-5fde1e6fdf58-config\") pod \"logging-loki-query-frontend-6d6859c548-4dmjf\" (UID: \"5447b950-1b55-4b40-8f6f-5fde1e6fdf58\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.430096 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.431053 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/5447b950-1b55-4b40-8f6f-5fde1e6fdf58-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-4dmjf\" (UID: \"5447b950-1b55-4b40-8f6f-5fde1e6fdf58\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.463680 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/5447b950-1b55-4b40-8f6f-5fde1e6fdf58-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-4dmjf\" (UID: \"5447b950-1b55-4b40-8f6f-5fde1e6fdf58\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.475108 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxrz9\" (UniqueName: \"kubernetes.io/projected/5447b950-1b55-4b40-8f6f-5fde1e6fdf58-kube-api-access-pxrz9\") pod 
\"logging-loki-query-frontend-6d6859c548-4dmjf\" (UID: \"5447b950-1b55-4b40-8f6f-5fde1e6fdf58\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.479494 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5db5847d75-whsqk"] Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.480669 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.491332 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.491857 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.491929 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.491970 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.492001 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.500098 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5db5847d75-dzs5f"] Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.501980 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.515066 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-gksbt" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.519431 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5db5847d75-whsqk"] Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.530357 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5db5847d75-dzs5f"] Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.583858 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.624747 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/032057e1-9a2f-40a9-931a-9ff902e0abeb-tls-secret\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.624789 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/032057e1-9a2f-40a9-931a-9ff902e0abeb-lokistack-gateway\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.624823 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/032057e1-9a2f-40a9-931a-9ff902e0abeb-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.624843 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d2f1ecd-980b-430c-8ed1-e83406722170-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.624859 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/9d2f1ecd-980b-430c-8ed1-e83406722170-tenants\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.624884 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/032057e1-9a2f-40a9-931a-9ff902e0abeb-rbac\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.624906 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/9d2f1ecd-980b-430c-8ed1-e83406722170-lokistack-gateway\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 
17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.624924 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr9q5\" (UniqueName: \"kubernetes.io/projected/032057e1-9a2f-40a9-931a-9ff902e0abeb-kube-api-access-dr9q5\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.624947 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/9d2f1ecd-980b-430c-8ed1-e83406722170-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.624962 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/032057e1-9a2f-40a9-931a-9ff902e0abeb-tenants\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.624979 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d2f1ecd-980b-430c-8ed1-e83406722170-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.625005 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: 
\"kubernetes.io/configmap/9d2f1ecd-980b-430c-8ed1-e83406722170-rbac\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.625022 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/032057e1-9a2f-40a9-931a-9ff902e0abeb-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.625040 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/9d2f1ecd-980b-430c-8ed1-e83406722170-tls-secret\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.625060 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/032057e1-9a2f-40a9-931a-9ff902e0abeb-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.625076 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djdh2\" (UniqueName: \"kubernetes.io/projected/9d2f1ecd-980b-430c-8ed1-e83406722170-kube-api-access-djdh2\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " 
pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.727500 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/032057e1-9a2f-40a9-931a-9ff902e0abeb-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.727551 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d2f1ecd-980b-430c-8ed1-e83406722170-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.727572 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/9d2f1ecd-980b-430c-8ed1-e83406722170-tenants\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.727601 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/032057e1-9a2f-40a9-931a-9ff902e0abeb-rbac\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.727628 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: 
\"kubernetes.io/configmap/9d2f1ecd-980b-430c-8ed1-e83406722170-lokistack-gateway\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.727646 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr9q5\" (UniqueName: \"kubernetes.io/projected/032057e1-9a2f-40a9-931a-9ff902e0abeb-kube-api-access-dr9q5\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.727667 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/9d2f1ecd-980b-430c-8ed1-e83406722170-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.727681 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/032057e1-9a2f-40a9-931a-9ff902e0abeb-tenants\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.727714 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d2f1ecd-980b-430c-8ed1-e83406722170-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 
17:13:01.727751 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/9d2f1ecd-980b-430c-8ed1-e83406722170-rbac\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.727769 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/032057e1-9a2f-40a9-931a-9ff902e0abeb-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.727796 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/9d2f1ecd-980b-430c-8ed1-e83406722170-tls-secret\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.727815 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/032057e1-9a2f-40a9-931a-9ff902e0abeb-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.727831 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djdh2\" (UniqueName: \"kubernetes.io/projected/9d2f1ecd-980b-430c-8ed1-e83406722170-kube-api-access-djdh2\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " 
pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.727870 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/032057e1-9a2f-40a9-931a-9ff902e0abeb-tls-secret\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.727889 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/032057e1-9a2f-40a9-931a-9ff902e0abeb-lokistack-gateway\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.729236 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d2f1ecd-980b-430c-8ed1-e83406722170-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.729682 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d2f1ecd-980b-430c-8ed1-e83406722170-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.729751 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/032057e1-9a2f-40a9-931a-9ff902e0abeb-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.729824 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/9d2f1ecd-980b-430c-8ed1-e83406722170-rbac\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.730432 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/032057e1-9a2f-40a9-931a-9ff902e0abeb-rbac\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.729468 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/032057e1-9a2f-40a9-931a-9ff902e0abeb-lokistack-gateway\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.731612 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/9d2f1ecd-980b-430c-8ed1-e83406722170-lokistack-gateway\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.732471 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" 
(UniqueName: \"kubernetes.io/secret/032057e1-9a2f-40a9-931a-9ff902e0abeb-tls-secret\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.732822 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/9d2f1ecd-980b-430c-8ed1-e83406722170-tls-secret\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.733221 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/032057e1-9a2f-40a9-931a-9ff902e0abeb-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.737405 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/032057e1-9a2f-40a9-931a-9ff902e0abeb-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.737445 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/9d2f1ecd-980b-430c-8ed1-e83406722170-tenants\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.739858 4794 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/032057e1-9a2f-40a9-931a-9ff902e0abeb-tenants\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.742217 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/9d2f1ecd-980b-430c-8ed1-e83406722170-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.746183 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djdh2\" (UniqueName: \"kubernetes.io/projected/9d2f1ecd-980b-430c-8ed1-e83406722170-kube-api-access-djdh2\") pod \"logging-loki-gateway-5db5847d75-whsqk\" (UID: \"9d2f1ecd-980b-430c-8ed1-e83406722170\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.747069 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr9q5\" (UniqueName: \"kubernetes.io/projected/032057e1-9a2f-40a9-931a-9ff902e0abeb-kube-api-access-dr9q5\") pod \"logging-loki-gateway-5db5847d75-dzs5f\" (UID: \"032057e1-9a2f-40a9-931a-9ff902e0abeb\") " pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.831796 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.843377 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.873464 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f"] Feb 16 17:13:01 crc kubenswrapper[4794]: W0216 17:13:01.876893 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod284971a6_d034_4e31_b64b_4e842d877aed.slice/crio-528979ba8d628605ae47083dfacc0c7cb34c1e6e85880c860d20874556bfcdcc WatchSource:0}: Error finding container 528979ba8d628605ae47083dfacc0c7cb34c1e6e85880c860d20874556bfcdcc: Status 404 returned error can't find the container with id 528979ba8d628605ae47083dfacc0c7cb34c1e6e85880c860d20874556bfcdcc Feb 16 17:13:01 crc kubenswrapper[4794]: I0216 17:13:01.922694 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf"] Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.031212 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj"] Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.090854 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.091704 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.094375 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.095196 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.107692 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.162043 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.163905 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.166958 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.167983 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.169056 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" event={"ID":"284971a6-d034-4e31-b64b-4e842d877aed","Type":"ContainerStarted","Data":"528979ba8d628605ae47083dfacc0c7cb34c1e6e85880c860d20874556bfcdcc"} Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.170478 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" event={"ID":"0814a3c5-3284-4e33-b3cc-4b4163bbcaa1","Type":"ContainerStarted","Data":"84f4e989a129fd3568760a3706e786031e5279c45662952af54340b880f196d9"} Feb 16 17:13:02 crc 
kubenswrapper[4794]: I0216 17:13:02.171925 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.176498 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" event={"ID":"5447b950-1b55-4b40-8f6f-5fde1e6fdf58","Type":"ContainerStarted","Data":"f43478610b19f9630fb56ef13a8252406daf650fc36d694874482e341163519b"} Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.235451 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a80879a-09d1-4346-bfd5-9dd30ed900f7-config\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.235520 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-958kx\" (UniqueName: \"kubernetes.io/projected/1972cc9c-56ea-410c-859f-e179b114fca7-kube-api-access-958kx\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.235561 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a80879a-09d1-4346-bfd5-9dd30ed900f7-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.235587 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1972cc9c-56ea-410c-859f-e179b114fca7-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.235616 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/1972cc9c-56ea-410c-859f-e179b114fca7-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.235652 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/0a80879a-09d1-4346-bfd5-9dd30ed900f7-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.235720 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/0a80879a-09d1-4346-bfd5-9dd30ed900f7-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.235745 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e1c7711a-d0e8-4f48-8830-2e2809016edb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1c7711a-d0e8-4f48-8830-2e2809016edb\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.235775 
4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfpkp\" (UniqueName: \"kubernetes.io/projected/0a80879a-09d1-4346-bfd5-9dd30ed900f7-kube-api-access-sfpkp\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.235813 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/1972cc9c-56ea-410c-859f-e179b114fca7-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.235841 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/1972cc9c-56ea-410c-859f-e179b114fca7-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.235873 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1972cc9c-56ea-410c-859f-e179b114fca7-config\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.235903 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/0a80879a-09d1-4346-bfd5-9dd30ed900f7-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " 
pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.235944 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f645df0d-e2e5-4ba6-974b-a424b1c5f5b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f645df0d-e2e5-4ba6-974b-a424b1c5f5b5\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.235982 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f6c1d66b-533c-4328-bbd2-40e342c5da2f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6c1d66b-533c-4328-bbd2-40e342c5da2f\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.306582 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.307467 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.309787 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.309875 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.321956 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.337085 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a80879a-09d1-4346-bfd5-9dd30ed900f7-config\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.337124 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-958kx\" (UniqueName: \"kubernetes.io/projected/1972cc9c-56ea-410c-859f-e179b114fca7-kube-api-access-958kx\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.337151 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a80879a-09d1-4346-bfd5-9dd30ed900f7-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.337172 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1972cc9c-56ea-410c-859f-e179b114fca7-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.337190 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/1972cc9c-56ea-410c-859f-e179b114fca7-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.337215 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/0a80879a-09d1-4346-bfd5-9dd30ed900f7-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.337241 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/0a80879a-09d1-4346-bfd5-9dd30ed900f7-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.337262 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e1c7711a-d0e8-4f48-8830-2e2809016edb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1c7711a-d0e8-4f48-8830-2e2809016edb\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.337281 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-sfpkp\" (UniqueName: \"kubernetes.io/projected/0a80879a-09d1-4346-bfd5-9dd30ed900f7-kube-api-access-sfpkp\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.337352 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/1972cc9c-56ea-410c-859f-e179b114fca7-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.337372 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/1972cc9c-56ea-410c-859f-e179b114fca7-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.337398 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1972cc9c-56ea-410c-859f-e179b114fca7-config\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.337421 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/0a80879a-09d1-4346-bfd5-9dd30ed900f7-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.337440 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-f645df0d-e2e5-4ba6-974b-a424b1c5f5b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f645df0d-e2e5-4ba6-974b-a424b1c5f5b5\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.337469 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f6c1d66b-533c-4328-bbd2-40e342c5da2f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6c1d66b-533c-4328-bbd2-40e342c5da2f\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.338624 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a80879a-09d1-4346-bfd5-9dd30ed900f7-config\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.339404 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a80879a-09d1-4346-bfd5-9dd30ed900f7-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.340369 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1972cc9c-56ea-410c-859f-e179b114fca7-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.342068 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/1972cc9c-56ea-410c-859f-e179b114fca7-config\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.343085 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.343122 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f645df0d-e2e5-4ba6-974b-a424b1c5f5b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f645df0d-e2e5-4ba6-974b-a424b1c5f5b5\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/56bbdd89e8f88f3f2350888ea65d6a1162e4e0886780d8e0c34cc266eb5538a3/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.343674 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/1972cc9c-56ea-410c-859f-e179b114fca7-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.343770 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.343809 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f6c1d66b-533c-4328-bbd2-40e342c5da2f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6c1d66b-533c-4328-bbd2-40e342c5da2f\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5059af7721bec82761dad0254aa960d25933d7425ef9a5042b327c05492b3b2a/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.344469 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5db5847d75-whsqk"] Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.347430 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/1972cc9c-56ea-410c-859f-e179b114fca7-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.347590 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/0a80879a-09d1-4346-bfd5-9dd30ed900f7-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.347657 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/0a80879a-09d1-4346-bfd5-9dd30ed900f7-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc 
kubenswrapper[4794]: I0216 17:13:02.352533 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/1972cc9c-56ea-410c-859f-e179b114fca7-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.352760 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.356518 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e1c7711a-d0e8-4f48-8830-2e2809016edb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1c7711a-d0e8-4f48-8830-2e2809016edb\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ebba81d7fd67e63d48e6e804ea699534abc0869f839616780cdec1336727ffe/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.359894 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/0a80879a-09d1-4346-bfd5-9dd30ed900f7-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.360814 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-958kx\" (UniqueName: \"kubernetes.io/projected/1972cc9c-56ea-410c-859f-e179b114fca7-kube-api-access-958kx\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 
17:13:02.361221 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfpkp\" (UniqueName: \"kubernetes.io/projected/0a80879a-09d1-4346-bfd5-9dd30ed900f7-kube-api-access-sfpkp\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.384545 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f645df0d-e2e5-4ba6-974b-a424b1c5f5b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f645df0d-e2e5-4ba6-974b-a424b1c5f5b5\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.401228 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5db5847d75-dzs5f"] Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.401374 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f6c1d66b-533c-4328-bbd2-40e342c5da2f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6c1d66b-533c-4328-bbd2-40e342c5da2f\") pod \"logging-loki-ingester-0\" (UID: \"0a80879a-09d1-4346-bfd5-9dd30ed900f7\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.402767 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e1c7711a-d0e8-4f48-8830-2e2809016edb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1c7711a-d0e8-4f48-8830-2e2809016edb\") pod \"logging-loki-compactor-0\" (UID: \"1972cc9c-56ea-410c-859f-e179b114fca7\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.409797 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.439248 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/3d3b5209-1436-45d6-9131-ad623f14e8f3-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.440173 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d3b5209-1436-45d6-9131-ad623f14e8f3-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.440262 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/3d3b5209-1436-45d6-9131-ad623f14e8f3-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.440424 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfxx4\" (UniqueName: \"kubernetes.io/projected/3d3b5209-1436-45d6-9131-ad623f14e8f3-kube-api-access-pfxx4\") pod \"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.440507 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/3d3b5209-1436-45d6-9131-ad623f14e8f3-config\") pod \"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.440595 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1887cecf-2d05-4f88-a4f3-5b06dd2bf3b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1887cecf-2d05-4f88-a4f3-5b06dd2bf3b6\") pod \"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.440664 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3d3b5209-1436-45d6-9131-ad623f14e8f3-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.479812 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.541909 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfxx4\" (UniqueName: \"kubernetes.io/projected/3d3b5209-1436-45d6-9131-ad623f14e8f3-kube-api-access-pfxx4\") pod \"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.541946 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d3b5209-1436-45d6-9131-ad623f14e8f3-config\") pod \"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.541987 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1887cecf-2d05-4f88-a4f3-5b06dd2bf3b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1887cecf-2d05-4f88-a4f3-5b06dd2bf3b6\") pod \"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.542006 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3d3b5209-1436-45d6-9131-ad623f14e8f3-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.542064 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/3d3b5209-1436-45d6-9131-ad623f14e8f3-logging-loki-index-gateway-grpc\") pod 
\"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.542098 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d3b5209-1436-45d6-9131-ad623f14e8f3-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.542122 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/3d3b5209-1436-45d6-9131-ad623f14e8f3-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.543282 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d3b5209-1436-45d6-9131-ad623f14e8f3-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.544039 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d3b5209-1436-45d6-9131-ad623f14e8f3-config\") pod \"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.546188 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.546233 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1887cecf-2d05-4f88-a4f3-5b06dd2bf3b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1887cecf-2d05-4f88-a4f3-5b06dd2bf3b6\") pod \"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5583cf9d02cd838b169ed1b4c817efcb8fad1d770c10803c2609b1455f81e779/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.548930 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3d3b5209-1436-45d6-9131-ad623f14e8f3-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.549248 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/3d3b5209-1436-45d6-9131-ad623f14e8f3-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.549426 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/3d3b5209-1436-45d6-9131-ad623f14e8f3-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.563444 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pfxx4\" (UniqueName: \"kubernetes.io/projected/3d3b5209-1436-45d6-9131-ad623f14e8f3-kube-api-access-pfxx4\") pod \"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.592008 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1887cecf-2d05-4f88-a4f3-5b06dd2bf3b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1887cecf-2d05-4f88-a4f3-5b06dd2bf3b6\") pod \"logging-loki-index-gateway-0\" (UID: \"3d3b5209-1436-45d6-9131-ad623f14e8f3\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.637075 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.841663 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 16 17:13:02 crc kubenswrapper[4794]: E0216 17:13:02.852121 4794 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a80879a_09d1_4346_bfd5_9dd30ed900f7.slice/crio-ad0d97ebfe6971881429e59e0c5bdfce04eb4f6286788e9eb71d1ec0b7de8b52\": RecentStats: unable to find data in memory cache]" Feb 16 17:13:02 crc kubenswrapper[4794]: I0216 17:13:02.917932 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 16 17:13:03 crc kubenswrapper[4794]: I0216 17:13:03.040467 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 16 17:13:03 crc kubenswrapper[4794]: I0216 17:13:03.186746 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" 
event={"ID":"9d2f1ecd-980b-430c-8ed1-e83406722170","Type":"ContainerStarted","Data":"bce343f04452165b6e0bf8c4e4132db31a457412818440333ef67e64bd38736f"} Feb 16 17:13:03 crc kubenswrapper[4794]: I0216 17:13:03.188923 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"0a80879a-09d1-4346-bfd5-9dd30ed900f7","Type":"ContainerStarted","Data":"ad0d97ebfe6971881429e59e0c5bdfce04eb4f6286788e9eb71d1ec0b7de8b52"} Feb 16 17:13:03 crc kubenswrapper[4794]: I0216 17:13:03.190004 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" event={"ID":"032057e1-9a2f-40a9-931a-9ff902e0abeb","Type":"ContainerStarted","Data":"ba30d4c5a14f3a5ca4fc0ebacea551b6aa76aff360086162bfa4689fcefc171c"} Feb 16 17:13:03 crc kubenswrapper[4794]: I0216 17:13:03.191564 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"3d3b5209-1436-45d6-9131-ad623f14e8f3","Type":"ContainerStarted","Data":"53808990ed582fe881c4acb3fb77ae8b08806e9e16a9a5b2c4003e76bc34db8f"} Feb 16 17:13:03 crc kubenswrapper[4794]: I0216 17:13:03.192923 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"1972cc9c-56ea-410c-859f-e179b114fca7","Type":"ContainerStarted","Data":"06def8013e19508a91fc60c9437d4d9e48f2b418a25778a523c395f44bd92004"} Feb 16 17:13:07 crc kubenswrapper[4794]: I0216 17:13:07.222317 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" event={"ID":"284971a6-d034-4e31-b64b-4e842d877aed","Type":"ContainerStarted","Data":"82018dfefa4efedec8cb7b474f9ecb4f5a832b97245b22691bb44ba1168cac4c"} Feb 16 17:13:07 crc kubenswrapper[4794]: I0216 17:13:07.223656 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" Feb 16 17:13:07 crc 
kubenswrapper[4794]: I0216 17:13:07.223954 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" event={"ID":"9d2f1ecd-980b-430c-8ed1-e83406722170","Type":"ContainerStarted","Data":"7cb7ddee08c0cf91c3e6e4b94912289d356372fdfcd56f1e7f20201199d94953"} Feb 16 17:13:07 crc kubenswrapper[4794]: I0216 17:13:07.224931 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"0a80879a-09d1-4346-bfd5-9dd30ed900f7","Type":"ContainerStarted","Data":"6aede3d3f0f2f9205d104c694f144beb56f9ab201e3db06d2af8f3d6af25faaa"} Feb 16 17:13:07 crc kubenswrapper[4794]: I0216 17:13:07.225382 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:13:07 crc kubenswrapper[4794]: I0216 17:13:07.226987 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" event={"ID":"032057e1-9a2f-40a9-931a-9ff902e0abeb","Type":"ContainerStarted","Data":"6dafdadbe43e3ebaf956532ef29ae842c68a443cecb304d1f560984c83be3ea0"} Feb 16 17:13:07 crc kubenswrapper[4794]: I0216 17:13:07.235227 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"3d3b5209-1436-45d6-9131-ad623f14e8f3","Type":"ContainerStarted","Data":"e1970d2ad7414dada13cc63fa63ab9ba3b15748b352864635e706cca8e9e8fa1"} Feb 16 17:13:07 crc kubenswrapper[4794]: I0216 17:13:07.235610 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:07 crc kubenswrapper[4794]: I0216 17:13:07.237103 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"1972cc9c-56ea-410c-859f-e179b114fca7","Type":"ContainerStarted","Data":"e8c2f27a59578593866bcb499092c3f1f4abdad78967c1763a9c54ed5c2211a6"} Feb 16 17:13:07 crc 
kubenswrapper[4794]: I0216 17:13:07.238029 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:07 crc kubenswrapper[4794]: I0216 17:13:07.239488 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" event={"ID":"0814a3c5-3284-4e33-b3cc-4b4163bbcaa1","Type":"ContainerStarted","Data":"efd26503edfa093b0a485ba1f02bb138462ae0f7f4825ee4539dc64faa6728a6"} Feb 16 17:13:07 crc kubenswrapper[4794]: I0216 17:13:07.239710 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 17:13:07 crc kubenswrapper[4794]: I0216 17:13:07.241120 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" event={"ID":"5447b950-1b55-4b40-8f6f-5fde1e6fdf58","Type":"ContainerStarted","Data":"bef526816bdc9c2ab73746da0896b6bd9eb736f6a24e41fbb58ea26c75646db8"} Feb 16 17:13:07 crc kubenswrapper[4794]: I0216 17:13:07.241286 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" Feb 16 17:13:07 crc kubenswrapper[4794]: I0216 17:13:07.247910 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" podStartSLOduration=3.01026247 podStartE2EDuration="7.247881234s" podCreationTimestamp="2026-02-16 17:13:00 +0000 UTC" firstStartedPulling="2026-02-16 17:13:01.879776011 +0000 UTC m=+807.827870658" lastFinishedPulling="2026-02-16 17:13:06.117394775 +0000 UTC m=+812.065489422" observedRunningTime="2026-02-16 17:13:07.24422354 +0000 UTC m=+813.192318237" watchObservedRunningTime="2026-02-16 17:13:07.247881234 +0000 UTC m=+813.195975901" Feb 16 17:13:07 crc kubenswrapper[4794]: I0216 17:13:07.281599 4794 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.244619346 podStartE2EDuration="6.28157519s" podCreationTimestamp="2026-02-16 17:13:01 +0000 UTC" firstStartedPulling="2026-02-16 17:13:02.942026493 +0000 UTC m=+808.890121140" lastFinishedPulling="2026-02-16 17:13:05.978982347 +0000 UTC m=+811.927076984" observedRunningTime="2026-02-16 17:13:07.27523247 +0000 UTC m=+813.223327117" watchObservedRunningTime="2026-02-16 17:13:07.28157519 +0000 UTC m=+813.229669847" Feb 16 17:13:07 crc kubenswrapper[4794]: I0216 17:13:07.325040 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" podStartSLOduration=2.258225828 podStartE2EDuration="6.325012083s" podCreationTimestamp="2026-02-16 17:13:01 +0000 UTC" firstStartedPulling="2026-02-16 17:13:02.040905055 +0000 UTC m=+807.988999702" lastFinishedPulling="2026-02-16 17:13:06.10769131 +0000 UTC m=+812.055785957" observedRunningTime="2026-02-16 17:13:07.309430111 +0000 UTC m=+813.257524778" watchObservedRunningTime="2026-02-16 17:13:07.325012083 +0000 UTC m=+813.273106740" Feb 16 17:13:07 crc kubenswrapper[4794]: I0216 17:13:07.355061 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.290907431 podStartE2EDuration="6.355034486s" podCreationTimestamp="2026-02-16 17:13:01 +0000 UTC" firstStartedPulling="2026-02-16 17:13:03.045816389 +0000 UTC m=+808.993911036" lastFinishedPulling="2026-02-16 17:13:06.109943444 +0000 UTC m=+812.058038091" observedRunningTime="2026-02-16 17:13:07.341624885 +0000 UTC m=+813.289719532" watchObservedRunningTime="2026-02-16 17:13:07.355034486 +0000 UTC m=+813.303129143" Feb 16 17:13:07 crc kubenswrapper[4794]: I0216 17:13:07.393581 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" 
podStartSLOduration=2.28046429 podStartE2EDuration="6.393552649s" podCreationTimestamp="2026-02-16 17:13:01 +0000 UTC" firstStartedPulling="2026-02-16 17:13:01.939197818 +0000 UTC m=+807.887292465" lastFinishedPulling="2026-02-16 17:13:06.052286167 +0000 UTC m=+812.000380824" observedRunningTime="2026-02-16 17:13:07.362478137 +0000 UTC m=+813.310572784" watchObservedRunningTime="2026-02-16 17:13:07.393552649 +0000 UTC m=+813.341647336" Feb 16 17:13:07 crc kubenswrapper[4794]: I0216 17:13:07.396066 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.130956321 podStartE2EDuration="6.39604779s" podCreationTimestamp="2026-02-16 17:13:01 +0000 UTC" firstStartedPulling="2026-02-16 17:13:02.843436965 +0000 UTC m=+808.791531622" lastFinishedPulling="2026-02-16 17:13:06.108528444 +0000 UTC m=+812.056623091" observedRunningTime="2026-02-16 17:13:07.383273897 +0000 UTC m=+813.331368584" watchObservedRunningTime="2026-02-16 17:13:07.39604779 +0000 UTC m=+813.344142477" Feb 16 17:13:09 crc kubenswrapper[4794]: I0216 17:13:09.259473 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" event={"ID":"9d2f1ecd-980b-430c-8ed1-e83406722170","Type":"ContainerStarted","Data":"d9d81006b68190997fe1c89d4a0fc635e65295fd80ca207d9cde1c152a91b0ae"} Feb 16 17:13:09 crc kubenswrapper[4794]: I0216 17:13:09.259998 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:09 crc kubenswrapper[4794]: I0216 17:13:09.261762 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" event={"ID":"032057e1-9a2f-40a9-931a-9ff902e0abeb","Type":"ContainerStarted","Data":"bf12623caba88d2f155d6bcde56679a4c0b5182fe7551cbe545636e5fb839fc1"} Feb 16 17:13:09 crc kubenswrapper[4794]: I0216 17:13:09.268453 4794 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:09 crc kubenswrapper[4794]: I0216 17:13:09.302880 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" podStartSLOduration=2.007551553 podStartE2EDuration="8.302856584s" podCreationTimestamp="2026-02-16 17:13:01 +0000 UTC" firstStartedPulling="2026-02-16 17:13:02.353964171 +0000 UTC m=+808.302058818" lastFinishedPulling="2026-02-16 17:13:08.649269212 +0000 UTC m=+814.597363849" observedRunningTime="2026-02-16 17:13:09.277499564 +0000 UTC m=+815.225594241" watchObservedRunningTime="2026-02-16 17:13:09.302856584 +0000 UTC m=+815.250951251" Feb 16 17:13:09 crc kubenswrapper[4794]: I0216 17:13:09.344676 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" podStartSLOduration=2.505102666 podStartE2EDuration="8.34465793s" podCreationTimestamp="2026-02-16 17:13:01 +0000 UTC" firstStartedPulling="2026-02-16 17:13:02.40393665 +0000 UTC m=+808.352031287" lastFinishedPulling="2026-02-16 17:13:08.243491894 +0000 UTC m=+814.191586551" observedRunningTime="2026-02-16 17:13:09.340141492 +0000 UTC m=+815.288236149" watchObservedRunningTime="2026-02-16 17:13:09.34465793 +0000 UTC m=+815.292752577" Feb 16 17:13:10 crc kubenswrapper[4794]: I0216 17:13:10.271263 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:10 crc kubenswrapper[4794]: I0216 17:13:10.271356 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:10 crc kubenswrapper[4794]: I0216 17:13:10.271388 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 
17:13:10 crc kubenswrapper[4794]: I0216 17:13:10.285792 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:10 crc kubenswrapper[4794]: I0216 17:13:10.290322 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5db5847d75-dzs5f" Feb 16 17:13:10 crc kubenswrapper[4794]: I0216 17:13:10.290602 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" Feb 16 17:13:21 crc kubenswrapper[4794]: I0216 17:13:21.276934 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-zvg2f" Feb 16 17:13:21 crc kubenswrapper[4794]: I0216 17:13:21.439175 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-cm8fj" Feb 16 17:13:21 crc kubenswrapper[4794]: I0216 17:13:21.593271 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-4dmjf" Feb 16 17:13:22 crc kubenswrapper[4794]: I0216 17:13:22.417025 4794 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 16 17:13:22 crc kubenswrapper[4794]: I0216 17:13:22.417092 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="0a80879a-09d1-4346-bfd5-9dd30ed900f7" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 17:13:22 crc kubenswrapper[4794]: I0216 17:13:22.489003 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-logging/logging-loki-compactor-0" Feb 16 17:13:22 crc kubenswrapper[4794]: I0216 17:13:22.649004 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 17:13:32 crc kubenswrapper[4794]: I0216 17:13:32.418493 4794 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 16 17:13:32 crc kubenswrapper[4794]: I0216 17:13:32.419180 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="0a80879a-09d1-4346-bfd5-9dd30ed900f7" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 17:13:42 crc kubenswrapper[4794]: I0216 17:13:42.418075 4794 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 16 17:13:42 crc kubenswrapper[4794]: I0216 17:13:42.418560 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="0a80879a-09d1-4346-bfd5-9dd30ed900f7" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 17:13:52 crc kubenswrapper[4794]: I0216 17:13:52.417248 4794 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 16 17:13:52 crc kubenswrapper[4794]: I0216 17:13:52.417998 4794 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-logging/logging-loki-ingester-0" podUID="0a80879a-09d1-4346-bfd5-9dd30ed900f7" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 17:14:02 crc kubenswrapper[4794]: I0216 17:14:02.415860 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 16 17:14:20 crc kubenswrapper[4794]: I0216 17:14:20.140924 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:14:20 crc kubenswrapper[4794]: I0216 17:14:20.141664 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.120288 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-vfvc7"] Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.121757 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.123576 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.123854 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-nrln4" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.124047 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.124121 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.126750 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.139038 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-vfvc7"] Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.139715 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.217269 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-collector-syslog-receiver\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.217343 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/80f4b18e-832f-4df0-9a42-6b088efac106-tmp\") pod \"collector-vfvc7\" (UID: 
\"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.217546 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-entrypoint\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.217624 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-collector-token\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.217704 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc4s8\" (UniqueName: \"kubernetes.io/projected/80f4b18e-832f-4df0-9a42-6b088efac106-kube-api-access-cc4s8\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.217795 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-trusted-ca\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.217827 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/80f4b18e-832f-4df0-9a42-6b088efac106-datadir\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " 
pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.217943 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-metrics\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.217995 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-config\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.218078 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-config-openshift-service-cacrt\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.218213 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/80f4b18e-832f-4df0-9a42-6b088efac106-sa-token\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.289698 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-vfvc7"] Feb 16 17:14:21 crc kubenswrapper[4794]: E0216 17:14:21.290554 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint 
kube-api-access-cc4s8 metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-vfvc7" podUID="80f4b18e-832f-4df0-9a42-6b088efac106" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.319512 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/80f4b18e-832f-4df0-9a42-6b088efac106-sa-token\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.319640 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-collector-syslog-receiver\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.319684 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/80f4b18e-832f-4df0-9a42-6b088efac106-tmp\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.319723 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-entrypoint\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.319762 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-collector-token\") pod \"collector-vfvc7\" (UID: 
\"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: E0216 17:14:21.319805 4794 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Feb 16 17:14:21 crc kubenswrapper[4794]: E0216 17:14:21.319904 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-collector-syslog-receiver podName:80f4b18e-832f-4df0-9a42-6b088efac106 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.819873522 +0000 UTC m=+887.767968189 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-collector-syslog-receiver") pod "collector-vfvc7" (UID: "80f4b18e-832f-4df0-9a42-6b088efac106") : secret "collector-syslog-receiver" not found Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.319811 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc4s8\" (UniqueName: \"kubernetes.io/projected/80f4b18e-832f-4df0-9a42-6b088efac106-kube-api-access-cc4s8\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.320036 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-trusted-ca\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.320065 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/80f4b18e-832f-4df0-9a42-6b088efac106-datadir\") pod \"collector-vfvc7\" (UID: 
\"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.320128 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-metrics\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.320157 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-config\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.320182 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-config-openshift-service-cacrt\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.320208 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/80f4b18e-832f-4df0-9a42-6b088efac106-datadir\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: E0216 17:14:21.320339 4794 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Feb 16 17:14:21 crc kubenswrapper[4794]: E0216 17:14:21.320387 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-metrics podName:80f4b18e-832f-4df0-9a42-6b088efac106 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:21.820371966 +0000 UTC m=+887.768466623 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-metrics") pod "collector-vfvc7" (UID: "80f4b18e-832f-4df0-9a42-6b088efac106") : secret "collector-metrics" not found Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.321091 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-entrypoint\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.321227 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-config\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.321596 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-config-openshift-service-cacrt\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.322116 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-trusted-ca\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.326108 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/80f4b18e-832f-4df0-9a42-6b088efac106-tmp\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.327679 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-collector-token\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.352956 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/80f4b18e-832f-4df0-9a42-6b088efac106-sa-token\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.357896 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc4s8\" (UniqueName: \"kubernetes.io/projected/80f4b18e-832f-4df0-9a42-6b088efac106-kube-api-access-cc4s8\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.826123 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-metrics\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.826296 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-collector-syslog-receiver\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " 
pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.829803 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-metrics\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.840326 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.841925 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-collector-syslog-receiver\") pod \"collector-vfvc7\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " pod="openshift-logging/collector-vfvc7" Feb 16 17:14:21 crc kubenswrapper[4794]: I0216 17:14:21.918282 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-vfvc7" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.028939 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/80f4b18e-832f-4df0-9a42-6b088efac106-datadir\") pod \"80f4b18e-832f-4df0-9a42-6b088efac106\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.028995 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-metrics\") pod \"80f4b18e-832f-4df0-9a42-6b088efac106\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.029054 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-trusted-ca\") pod \"80f4b18e-832f-4df0-9a42-6b088efac106\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.029074 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc4s8\" (UniqueName: \"kubernetes.io/projected/80f4b18e-832f-4df0-9a42-6b088efac106-kube-api-access-cc4s8\") pod \"80f4b18e-832f-4df0-9a42-6b088efac106\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.029124 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-config-openshift-service-cacrt\") pod \"80f4b18e-832f-4df0-9a42-6b088efac106\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.029145 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-config\") pod \"80f4b18e-832f-4df0-9a42-6b088efac106\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.029167 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-collector-syslog-receiver\") pod \"80f4b18e-832f-4df0-9a42-6b088efac106\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.029246 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-entrypoint\") pod \"80f4b18e-832f-4df0-9a42-6b088efac106\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.029272 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/80f4b18e-832f-4df0-9a42-6b088efac106-sa-token\") pod \"80f4b18e-832f-4df0-9a42-6b088efac106\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.029295 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-collector-token\") pod \"80f4b18e-832f-4df0-9a42-6b088efac106\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.029338 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/80f4b18e-832f-4df0-9a42-6b088efac106-tmp\") pod \"80f4b18e-832f-4df0-9a42-6b088efac106\" (UID: \"80f4b18e-832f-4df0-9a42-6b088efac106\") " Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.029063 4794 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80f4b18e-832f-4df0-9a42-6b088efac106-datadir" (OuterVolumeSpecName: "datadir") pod "80f4b18e-832f-4df0-9a42-6b088efac106" (UID: "80f4b18e-832f-4df0-9a42-6b088efac106"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.029683 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "80f4b18e-832f-4df0-9a42-6b088efac106" (UID: "80f4b18e-832f-4df0-9a42-6b088efac106"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.029731 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-config" (OuterVolumeSpecName: "config") pod "80f4b18e-832f-4df0-9a42-6b088efac106" (UID: "80f4b18e-832f-4df0-9a42-6b088efac106"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.029823 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "80f4b18e-832f-4df0-9a42-6b088efac106" (UID: "80f4b18e-832f-4df0-9a42-6b088efac106"). InnerVolumeSpecName "config-openshift-service-cacrt". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.030132 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "80f4b18e-832f-4df0-9a42-6b088efac106" (UID: "80f4b18e-832f-4df0-9a42-6b088efac106"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.033777 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80f4b18e-832f-4df0-9a42-6b088efac106-tmp" (OuterVolumeSpecName: "tmp") pod "80f4b18e-832f-4df0-9a42-6b088efac106" (UID: "80f4b18e-832f-4df0-9a42-6b088efac106"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.033804 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80f4b18e-832f-4df0-9a42-6b088efac106-sa-token" (OuterVolumeSpecName: "sa-token") pod "80f4b18e-832f-4df0-9a42-6b088efac106" (UID: "80f4b18e-832f-4df0-9a42-6b088efac106"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.033806 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-metrics" (OuterVolumeSpecName: "metrics") pod "80f4b18e-832f-4df0-9a42-6b088efac106" (UID: "80f4b18e-832f-4df0-9a42-6b088efac106"). InnerVolumeSpecName "metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.033788 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-collector-token" (OuterVolumeSpecName: "collector-token") pod "80f4b18e-832f-4df0-9a42-6b088efac106" (UID: "80f4b18e-832f-4df0-9a42-6b088efac106"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.033824 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "80f4b18e-832f-4df0-9a42-6b088efac106" (UID: "80f4b18e-832f-4df0-9a42-6b088efac106"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.033902 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80f4b18e-832f-4df0-9a42-6b088efac106-kube-api-access-cc4s8" (OuterVolumeSpecName: "kube-api-access-cc4s8") pod "80f4b18e-832f-4df0-9a42-6b088efac106" (UID: "80f4b18e-832f-4df0-9a42-6b088efac106"). InnerVolumeSpecName "kube-api-access-cc4s8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.131921 4794 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/80f4b18e-832f-4df0-9a42-6b088efac106-datadir\") on node \"crc\" DevicePath \"\"" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.131975 4794 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.131997 4794 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.132017 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc4s8\" (UniqueName: \"kubernetes.io/projected/80f4b18e-832f-4df0-9a42-6b088efac106-kube-api-access-cc4s8\") on node \"crc\" DevicePath \"\"" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.132040 4794 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.132059 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.132077 4794 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 
17:14:22.132094 4794 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/80f4b18e-832f-4df0-9a42-6b088efac106-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.132111 4794 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/80f4b18e-832f-4df0-9a42-6b088efac106-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.132127 4794 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/80f4b18e-832f-4df0-9a42-6b088efac106-collector-token\") on node \"crc\" DevicePath \"\"" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.132144 4794 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/80f4b18e-832f-4df0-9a42-6b088efac106-tmp\") on node \"crc\" DevicePath \"\"" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.848726 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-vfvc7" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.920900 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-vfvc7"] Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.920973 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-vfvc7"] Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.929491 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-z59t9"] Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.930490 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-z59t9" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.934692 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.934865 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.934972 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.935193 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.935353 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-nrln4" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.940182 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 16 17:14:22 crc kubenswrapper[4794]: I0216 17:14:22.964499 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-z59t9"] Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.045539 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/fcda750d-2cf9-47c5-a47a-fdc01b82e986-metrics\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.045609 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/fcda750d-2cf9-47c5-a47a-fdc01b82e986-sa-token\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " 
pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.045711 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/fcda750d-2cf9-47c5-a47a-fdc01b82e986-collector-token\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.045726 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fcda750d-2cf9-47c5-a47a-fdc01b82e986-trusted-ca\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.045748 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmwf5\" (UniqueName: \"kubernetes.io/projected/fcda750d-2cf9-47c5-a47a-fdc01b82e986-kube-api-access-tmwf5\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.045779 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcda750d-2cf9-47c5-a47a-fdc01b82e986-config\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.045882 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/fcda750d-2cf9-47c5-a47a-fdc01b82e986-entrypoint\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc 
kubenswrapper[4794]: I0216 17:14:23.045974 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/fcda750d-2cf9-47c5-a47a-fdc01b82e986-collector-syslog-receiver\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.046003 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fcda750d-2cf9-47c5-a47a-fdc01b82e986-tmp\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.046029 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/fcda750d-2cf9-47c5-a47a-fdc01b82e986-config-openshift-service-cacrt\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.046046 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/fcda750d-2cf9-47c5-a47a-fdc01b82e986-datadir\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.147150 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/fcda750d-2cf9-47c5-a47a-fdc01b82e986-metrics\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.147251 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/fcda750d-2cf9-47c5-a47a-fdc01b82e986-sa-token\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.147361 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/fcda750d-2cf9-47c5-a47a-fdc01b82e986-collector-token\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.147393 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fcda750d-2cf9-47c5-a47a-fdc01b82e986-trusted-ca\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.147428 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmwf5\" (UniqueName: \"kubernetes.io/projected/fcda750d-2cf9-47c5-a47a-fdc01b82e986-kube-api-access-tmwf5\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.147489 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcda750d-2cf9-47c5-a47a-fdc01b82e986-config\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.147529 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: 
\"kubernetes.io/configmap/fcda750d-2cf9-47c5-a47a-fdc01b82e986-entrypoint\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.147571 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/fcda750d-2cf9-47c5-a47a-fdc01b82e986-collector-syslog-receiver\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.147615 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fcda750d-2cf9-47c5-a47a-fdc01b82e986-tmp\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.147662 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/fcda750d-2cf9-47c5-a47a-fdc01b82e986-config-openshift-service-cacrt\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.147697 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/fcda750d-2cf9-47c5-a47a-fdc01b82e986-datadir\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.147808 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/fcda750d-2cf9-47c5-a47a-fdc01b82e986-datadir\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " 
pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.148738 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/fcda750d-2cf9-47c5-a47a-fdc01b82e986-config-openshift-service-cacrt\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.148834 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcda750d-2cf9-47c5-a47a-fdc01b82e986-config\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.151037 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fcda750d-2cf9-47c5-a47a-fdc01b82e986-tmp\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.151423 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/fcda750d-2cf9-47c5-a47a-fdc01b82e986-entrypoint\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.151575 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/fcda750d-2cf9-47c5-a47a-fdc01b82e986-collector-syslog-receiver\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.152645 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fcda750d-2cf9-47c5-a47a-fdc01b82e986-trusted-ca\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.153507 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/fcda750d-2cf9-47c5-a47a-fdc01b82e986-metrics\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.154588 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/fcda750d-2cf9-47c5-a47a-fdc01b82e986-collector-token\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.167439 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmwf5\" (UniqueName: \"kubernetes.io/projected/fcda750d-2cf9-47c5-a47a-fdc01b82e986-kube-api-access-tmwf5\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.174786 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/fcda750d-2cf9-47c5-a47a-fdc01b82e986-sa-token\") pod \"collector-z59t9\" (UID: \"fcda750d-2cf9-47c5-a47a-fdc01b82e986\") " pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.252442 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-z59t9" Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.704706 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-z59t9"] Feb 16 17:14:23 crc kubenswrapper[4794]: I0216 17:14:23.856028 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-z59t9" event={"ID":"fcda750d-2cf9-47c5-a47a-fdc01b82e986","Type":"ContainerStarted","Data":"f1424fcce64a7bb373108d0e73b99f8c50f9b87e5fc13762caaf18e675551c62"} Feb 16 17:14:24 crc kubenswrapper[4794]: I0216 17:14:24.801469 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80f4b18e-832f-4df0-9a42-6b088efac106" path="/var/lib/kubelet/pods/80f4b18e-832f-4df0-9a42-6b088efac106/volumes" Feb 16 17:14:30 crc kubenswrapper[4794]: I0216 17:14:30.911573 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-z59t9" event={"ID":"fcda750d-2cf9-47c5-a47a-fdc01b82e986","Type":"ContainerStarted","Data":"0d00c4396db0bbf2e3a1cc8a30ebd45bd2355b56adda41eb9840d978222bc62b"} Feb 16 17:14:30 crc kubenswrapper[4794]: I0216 17:14:30.957181 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-z59t9" podStartSLOduration=2.568948405 podStartE2EDuration="8.957152693s" podCreationTimestamp="2026-02-16 17:14:22 +0000 UTC" firstStartedPulling="2026-02-16 17:14:23.721903793 +0000 UTC m=+889.669998440" lastFinishedPulling="2026-02-16 17:14:30.110108081 +0000 UTC m=+896.058202728" observedRunningTime="2026-02-16 17:14:30.947470648 +0000 UTC m=+896.895565345" watchObservedRunningTime="2026-02-16 17:14:30.957152693 +0000 UTC m=+896.905247360" Feb 16 17:14:50 crc kubenswrapper[4794]: I0216 17:14:50.141639 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:14:50 crc kubenswrapper[4794]: I0216 17:14:50.142384 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:15:00 crc kubenswrapper[4794]: I0216 17:15:00.155064 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg"] Feb 16 17:15:00 crc kubenswrapper[4794]: I0216 17:15:00.156715 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg" Feb 16 17:15:00 crc kubenswrapper[4794]: I0216 17:15:00.159206 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 17:15:00 crc kubenswrapper[4794]: I0216 17:15:00.159512 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 17:15:00 crc kubenswrapper[4794]: I0216 17:15:00.169522 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg"] Feb 16 17:15:00 crc kubenswrapper[4794]: I0216 17:15:00.197065 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27cfc9a8-5cbe-4493-865a-115bf389ec3b-config-volume\") pod \"collect-profiles-29521035-7bfvg\" (UID: \"27cfc9a8-5cbe-4493-865a-115bf389ec3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg" Feb 16 17:15:00 crc kubenswrapper[4794]: I0216 17:15:00.197275 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbzl7\" (UniqueName: \"kubernetes.io/projected/27cfc9a8-5cbe-4493-865a-115bf389ec3b-kube-api-access-kbzl7\") pod \"collect-profiles-29521035-7bfvg\" (UID: \"27cfc9a8-5cbe-4493-865a-115bf389ec3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg" Feb 16 17:15:00 crc kubenswrapper[4794]: I0216 17:15:00.197429 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27cfc9a8-5cbe-4493-865a-115bf389ec3b-secret-volume\") pod \"collect-profiles-29521035-7bfvg\" (UID: \"27cfc9a8-5cbe-4493-865a-115bf389ec3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg" Feb 16 17:15:00 crc kubenswrapper[4794]: I0216 17:15:00.298531 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbzl7\" (UniqueName: \"kubernetes.io/projected/27cfc9a8-5cbe-4493-865a-115bf389ec3b-kube-api-access-kbzl7\") pod \"collect-profiles-29521035-7bfvg\" (UID: \"27cfc9a8-5cbe-4493-865a-115bf389ec3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg" Feb 16 17:15:00 crc kubenswrapper[4794]: I0216 17:15:00.298589 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27cfc9a8-5cbe-4493-865a-115bf389ec3b-secret-volume\") pod \"collect-profiles-29521035-7bfvg\" (UID: \"27cfc9a8-5cbe-4493-865a-115bf389ec3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg" Feb 16 17:15:00 crc kubenswrapper[4794]: I0216 17:15:00.298619 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27cfc9a8-5cbe-4493-865a-115bf389ec3b-config-volume\") pod \"collect-profiles-29521035-7bfvg\" (UID: \"27cfc9a8-5cbe-4493-865a-115bf389ec3b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg" Feb 16 17:15:00 crc kubenswrapper[4794]: I0216 17:15:00.299515 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27cfc9a8-5cbe-4493-865a-115bf389ec3b-config-volume\") pod \"collect-profiles-29521035-7bfvg\" (UID: \"27cfc9a8-5cbe-4493-865a-115bf389ec3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg" Feb 16 17:15:00 crc kubenswrapper[4794]: I0216 17:15:00.312020 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27cfc9a8-5cbe-4493-865a-115bf389ec3b-secret-volume\") pod \"collect-profiles-29521035-7bfvg\" (UID: \"27cfc9a8-5cbe-4493-865a-115bf389ec3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg" Feb 16 17:15:00 crc kubenswrapper[4794]: I0216 17:15:00.314785 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbzl7\" (UniqueName: \"kubernetes.io/projected/27cfc9a8-5cbe-4493-865a-115bf389ec3b-kube-api-access-kbzl7\") pod \"collect-profiles-29521035-7bfvg\" (UID: \"27cfc9a8-5cbe-4493-865a-115bf389ec3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg" Feb 16 17:15:00 crc kubenswrapper[4794]: I0216 17:15:00.476539 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg" Feb 16 17:15:00 crc kubenswrapper[4794]: I0216 17:15:00.949299 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg"] Feb 16 17:15:00 crc kubenswrapper[4794]: W0216 17:15:00.964467 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27cfc9a8_5cbe_4493_865a_115bf389ec3b.slice/crio-c599eb3431fb2bf3f6028469cf34aede48e8457490b187922abff582d59cce14 WatchSource:0}: Error finding container c599eb3431fb2bf3f6028469cf34aede48e8457490b187922abff582d59cce14: Status 404 returned error can't find the container with id c599eb3431fb2bf3f6028469cf34aede48e8457490b187922abff582d59cce14 Feb 16 17:15:01 crc kubenswrapper[4794]: I0216 17:15:01.171513 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg" event={"ID":"27cfc9a8-5cbe-4493-865a-115bf389ec3b","Type":"ContainerStarted","Data":"cf9c9fac47fe6665514641843d214a2cfeed9f0c06f7e93bc1645127a7883c2b"} Feb 16 17:15:01 crc kubenswrapper[4794]: I0216 17:15:01.173030 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg" event={"ID":"27cfc9a8-5cbe-4493-865a-115bf389ec3b","Type":"ContainerStarted","Data":"c599eb3431fb2bf3f6028469cf34aede48e8457490b187922abff582d59cce14"} Feb 16 17:15:01 crc kubenswrapper[4794]: I0216 17:15:01.189607 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg" podStartSLOduration=1.189593832 podStartE2EDuration="1.189593832s" podCreationTimestamp="2026-02-16 17:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 
17:15:01.1838556 +0000 UTC m=+927.131950267" watchObservedRunningTime="2026-02-16 17:15:01.189593832 +0000 UTC m=+927.137688479" Feb 16 17:15:01 crc kubenswrapper[4794]: I0216 17:15:01.529257 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm"] Feb 16 17:15:01 crc kubenswrapper[4794]: I0216 17:15:01.536524 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm"] Feb 16 17:15:01 crc kubenswrapper[4794]: I0216 17:15:01.536796 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm" Feb 16 17:15:01 crc kubenswrapper[4794]: I0216 17:15:01.548064 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 17:15:01 crc kubenswrapper[4794]: I0216 17:15:01.722865 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4782dec2-0df6-498a-908f-ba56f68b462f-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm\" (UID: \"4782dec2-0df6-498a-908f-ba56f68b462f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm" Feb 16 17:15:01 crc kubenswrapper[4794]: I0216 17:15:01.723177 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6nld\" (UniqueName: \"kubernetes.io/projected/4782dec2-0df6-498a-908f-ba56f68b462f-kube-api-access-b6nld\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm\" (UID: \"4782dec2-0df6-498a-908f-ba56f68b462f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm" Feb 16 17:15:01 crc kubenswrapper[4794]: I0216 17:15:01.723357 
4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4782dec2-0df6-498a-908f-ba56f68b462f-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm\" (UID: \"4782dec2-0df6-498a-908f-ba56f68b462f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm" Feb 16 17:15:01 crc kubenswrapper[4794]: I0216 17:15:01.824825 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4782dec2-0df6-498a-908f-ba56f68b462f-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm\" (UID: \"4782dec2-0df6-498a-908f-ba56f68b462f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm" Feb 16 17:15:01 crc kubenswrapper[4794]: I0216 17:15:01.825163 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6nld\" (UniqueName: \"kubernetes.io/projected/4782dec2-0df6-498a-908f-ba56f68b462f-kube-api-access-b6nld\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm\" (UID: \"4782dec2-0df6-498a-908f-ba56f68b462f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm" Feb 16 17:15:01 crc kubenswrapper[4794]: I0216 17:15:01.825620 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4782dec2-0df6-498a-908f-ba56f68b462f-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm\" (UID: \"4782dec2-0df6-498a-908f-ba56f68b462f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm" Feb 16 17:15:01 crc kubenswrapper[4794]: I0216 17:15:01.826428 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/4782dec2-0df6-498a-908f-ba56f68b462f-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm\" (UID: \"4782dec2-0df6-498a-908f-ba56f68b462f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm" Feb 16 17:15:01 crc kubenswrapper[4794]: I0216 17:15:01.825795 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4782dec2-0df6-498a-908f-ba56f68b462f-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm\" (UID: \"4782dec2-0df6-498a-908f-ba56f68b462f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm" Feb 16 17:15:01 crc kubenswrapper[4794]: I0216 17:15:01.867831 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6nld\" (UniqueName: \"kubernetes.io/projected/4782dec2-0df6-498a-908f-ba56f68b462f-kube-api-access-b6nld\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm\" (UID: \"4782dec2-0df6-498a-908f-ba56f68b462f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm" Feb 16 17:15:02 crc kubenswrapper[4794]: I0216 17:15:02.150990 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm" Feb 16 17:15:02 crc kubenswrapper[4794]: I0216 17:15:02.182952 4794 generic.go:334] "Generic (PLEG): container finished" podID="27cfc9a8-5cbe-4493-865a-115bf389ec3b" containerID="cf9c9fac47fe6665514641843d214a2cfeed9f0c06f7e93bc1645127a7883c2b" exitCode=0 Feb 16 17:15:02 crc kubenswrapper[4794]: I0216 17:15:02.183021 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg" event={"ID":"27cfc9a8-5cbe-4493-865a-115bf389ec3b","Type":"ContainerDied","Data":"cf9c9fac47fe6665514641843d214a2cfeed9f0c06f7e93bc1645127a7883c2b"} Feb 16 17:15:02 crc kubenswrapper[4794]: I0216 17:15:02.592014 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm"] Feb 16 17:15:03 crc kubenswrapper[4794]: I0216 17:15:03.191247 4794 generic.go:334] "Generic (PLEG): container finished" podID="4782dec2-0df6-498a-908f-ba56f68b462f" containerID="eedeae4edf88065e1aa8112ee76c6e5d70c172250c1ba6f68b6d2ff00a80f020" exitCode=0 Feb 16 17:15:03 crc kubenswrapper[4794]: I0216 17:15:03.191355 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm" event={"ID":"4782dec2-0df6-498a-908f-ba56f68b462f","Type":"ContainerDied","Data":"eedeae4edf88065e1aa8112ee76c6e5d70c172250c1ba6f68b6d2ff00a80f020"} Feb 16 17:15:03 crc kubenswrapper[4794]: I0216 17:15:03.191389 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm" event={"ID":"4782dec2-0df6-498a-908f-ba56f68b462f","Type":"ContainerStarted","Data":"fecf9e85cbf1f24fe692266b6745be6d8ddb9d3561da3875a7bd817d7c1d6bd5"} Feb 16 17:15:03 crc kubenswrapper[4794]: I0216 17:15:03.565265 4794 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg" Feb 16 17:15:03 crc kubenswrapper[4794]: I0216 17:15:03.660848 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27cfc9a8-5cbe-4493-865a-115bf389ec3b-config-volume\") pod \"27cfc9a8-5cbe-4493-865a-115bf389ec3b\" (UID: \"27cfc9a8-5cbe-4493-865a-115bf389ec3b\") " Feb 16 17:15:03 crc kubenswrapper[4794]: I0216 17:15:03.660892 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbzl7\" (UniqueName: \"kubernetes.io/projected/27cfc9a8-5cbe-4493-865a-115bf389ec3b-kube-api-access-kbzl7\") pod \"27cfc9a8-5cbe-4493-865a-115bf389ec3b\" (UID: \"27cfc9a8-5cbe-4493-865a-115bf389ec3b\") " Feb 16 17:15:03 crc kubenswrapper[4794]: I0216 17:15:03.660945 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27cfc9a8-5cbe-4493-865a-115bf389ec3b-secret-volume\") pod \"27cfc9a8-5cbe-4493-865a-115bf389ec3b\" (UID: \"27cfc9a8-5cbe-4493-865a-115bf389ec3b\") " Feb 16 17:15:03 crc kubenswrapper[4794]: I0216 17:15:03.661459 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27cfc9a8-5cbe-4493-865a-115bf389ec3b-config-volume" (OuterVolumeSpecName: "config-volume") pod "27cfc9a8-5cbe-4493-865a-115bf389ec3b" (UID: "27cfc9a8-5cbe-4493-865a-115bf389ec3b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:15:03 crc kubenswrapper[4794]: I0216 17:15:03.667655 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27cfc9a8-5cbe-4493-865a-115bf389ec3b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "27cfc9a8-5cbe-4493-865a-115bf389ec3b" (UID: "27cfc9a8-5cbe-4493-865a-115bf389ec3b"). 
InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:15:03 crc kubenswrapper[4794]: I0216 17:15:03.669470 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27cfc9a8-5cbe-4493-865a-115bf389ec3b-kube-api-access-kbzl7" (OuterVolumeSpecName: "kube-api-access-kbzl7") pod "27cfc9a8-5cbe-4493-865a-115bf389ec3b" (UID: "27cfc9a8-5cbe-4493-865a-115bf389ec3b"). InnerVolumeSpecName "kube-api-access-kbzl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:15:03 crc kubenswrapper[4794]: I0216 17:15:03.762202 4794 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27cfc9a8-5cbe-4493-865a-115bf389ec3b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:03 crc kubenswrapper[4794]: I0216 17:15:03.762243 4794 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27cfc9a8-5cbe-4493-865a-115bf389ec3b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:03 crc kubenswrapper[4794]: I0216 17:15:03.762255 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbzl7\" (UniqueName: \"kubernetes.io/projected/27cfc9a8-5cbe-4493-865a-115bf389ec3b-kube-api-access-kbzl7\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:04 crc kubenswrapper[4794]: I0216 17:15:04.200820 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg" event={"ID":"27cfc9a8-5cbe-4493-865a-115bf389ec3b","Type":"ContainerDied","Data":"c599eb3431fb2bf3f6028469cf34aede48e8457490b187922abff582d59cce14"} Feb 16 17:15:04 crc kubenswrapper[4794]: I0216 17:15:04.200891 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c599eb3431fb2bf3f6028469cf34aede48e8457490b187922abff582d59cce14" Feb 16 17:15:04 crc kubenswrapper[4794]: I0216 17:15:04.200993 4794 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg" Feb 16 17:15:05 crc kubenswrapper[4794]: I0216 17:15:05.208873 4794 generic.go:334] "Generic (PLEG): container finished" podID="4782dec2-0df6-498a-908f-ba56f68b462f" containerID="b4a2d3ab2b5e8806029cdb6fdb8e6611b947e46363a537a2955564efd3abdd2e" exitCode=0 Feb 16 17:15:05 crc kubenswrapper[4794]: I0216 17:15:05.208969 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm" event={"ID":"4782dec2-0df6-498a-908f-ba56f68b462f","Type":"ContainerDied","Data":"b4a2d3ab2b5e8806029cdb6fdb8e6611b947e46363a537a2955564efd3abdd2e"} Feb 16 17:15:06 crc kubenswrapper[4794]: I0216 17:15:06.218610 4794 generic.go:334] "Generic (PLEG): container finished" podID="4782dec2-0df6-498a-908f-ba56f68b462f" containerID="8a1307aa295ddec0f7a89982706bd8e39cbe4d12ceba20b1a48d6d0ee7d398ad" exitCode=0 Feb 16 17:15:06 crc kubenswrapper[4794]: I0216 17:15:06.218703 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm" event={"ID":"4782dec2-0df6-498a-908f-ba56f68b462f","Type":"ContainerDied","Data":"8a1307aa295ddec0f7a89982706bd8e39cbe4d12ceba20b1a48d6d0ee7d398ad"} Feb 16 17:15:07 crc kubenswrapper[4794]: I0216 17:15:07.499459 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm" Feb 16 17:15:07 crc kubenswrapper[4794]: I0216 17:15:07.621273 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4782dec2-0df6-498a-908f-ba56f68b462f-bundle\") pod \"4782dec2-0df6-498a-908f-ba56f68b462f\" (UID: \"4782dec2-0df6-498a-908f-ba56f68b462f\") " Feb 16 17:15:07 crc kubenswrapper[4794]: I0216 17:15:07.621365 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6nld\" (UniqueName: \"kubernetes.io/projected/4782dec2-0df6-498a-908f-ba56f68b462f-kube-api-access-b6nld\") pod \"4782dec2-0df6-498a-908f-ba56f68b462f\" (UID: \"4782dec2-0df6-498a-908f-ba56f68b462f\") " Feb 16 17:15:07 crc kubenswrapper[4794]: I0216 17:15:07.621389 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4782dec2-0df6-498a-908f-ba56f68b462f-util\") pod \"4782dec2-0df6-498a-908f-ba56f68b462f\" (UID: \"4782dec2-0df6-498a-908f-ba56f68b462f\") " Feb 16 17:15:07 crc kubenswrapper[4794]: I0216 17:15:07.622023 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4782dec2-0df6-498a-908f-ba56f68b462f-bundle" (OuterVolumeSpecName: "bundle") pod "4782dec2-0df6-498a-908f-ba56f68b462f" (UID: "4782dec2-0df6-498a-908f-ba56f68b462f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:15:07 crc kubenswrapper[4794]: I0216 17:15:07.627529 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4782dec2-0df6-498a-908f-ba56f68b462f-kube-api-access-b6nld" (OuterVolumeSpecName: "kube-api-access-b6nld") pod "4782dec2-0df6-498a-908f-ba56f68b462f" (UID: "4782dec2-0df6-498a-908f-ba56f68b462f"). InnerVolumeSpecName "kube-api-access-b6nld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:15:07 crc kubenswrapper[4794]: I0216 17:15:07.634337 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4782dec2-0df6-498a-908f-ba56f68b462f-util" (OuterVolumeSpecName: "util") pod "4782dec2-0df6-498a-908f-ba56f68b462f" (UID: "4782dec2-0df6-498a-908f-ba56f68b462f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:15:07 crc kubenswrapper[4794]: I0216 17:15:07.723250 4794 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4782dec2-0df6-498a-908f-ba56f68b462f-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:07 crc kubenswrapper[4794]: I0216 17:15:07.723292 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6nld\" (UniqueName: \"kubernetes.io/projected/4782dec2-0df6-498a-908f-ba56f68b462f-kube-api-access-b6nld\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:07 crc kubenswrapper[4794]: I0216 17:15:07.723321 4794 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4782dec2-0df6-498a-908f-ba56f68b462f-util\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:08 crc kubenswrapper[4794]: I0216 17:15:08.234252 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm" event={"ID":"4782dec2-0df6-498a-908f-ba56f68b462f","Type":"ContainerDied","Data":"fecf9e85cbf1f24fe692266b6745be6d8ddb9d3561da3875a7bd817d7c1d6bd5"} Feb 16 17:15:08 crc kubenswrapper[4794]: I0216 17:15:08.234372 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fecf9e85cbf1f24fe692266b6745be6d8ddb9d3561da3875a7bd817d7c1d6bd5" Feb 16 17:15:08 crc kubenswrapper[4794]: I0216 17:15:08.234460 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm" Feb 16 17:15:08 crc kubenswrapper[4794]: I0216 17:15:08.886276 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gv97q"] Feb 16 17:15:08 crc kubenswrapper[4794]: E0216 17:15:08.886539 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27cfc9a8-5cbe-4493-865a-115bf389ec3b" containerName="collect-profiles" Feb 16 17:15:08 crc kubenswrapper[4794]: I0216 17:15:08.886551 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="27cfc9a8-5cbe-4493-865a-115bf389ec3b" containerName="collect-profiles" Feb 16 17:15:08 crc kubenswrapper[4794]: E0216 17:15:08.886562 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4782dec2-0df6-498a-908f-ba56f68b462f" containerName="extract" Feb 16 17:15:08 crc kubenswrapper[4794]: I0216 17:15:08.886567 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="4782dec2-0df6-498a-908f-ba56f68b462f" containerName="extract" Feb 16 17:15:08 crc kubenswrapper[4794]: E0216 17:15:08.886584 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4782dec2-0df6-498a-908f-ba56f68b462f" containerName="util" Feb 16 17:15:08 crc kubenswrapper[4794]: I0216 17:15:08.886591 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="4782dec2-0df6-498a-908f-ba56f68b462f" containerName="util" Feb 16 17:15:08 crc kubenswrapper[4794]: E0216 17:15:08.886606 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4782dec2-0df6-498a-908f-ba56f68b462f" containerName="pull" Feb 16 17:15:08 crc kubenswrapper[4794]: I0216 17:15:08.886612 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="4782dec2-0df6-498a-908f-ba56f68b462f" containerName="pull" Feb 16 17:15:08 crc kubenswrapper[4794]: I0216 17:15:08.886728 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="4782dec2-0df6-498a-908f-ba56f68b462f" 
containerName="extract" Feb 16 17:15:08 crc kubenswrapper[4794]: I0216 17:15:08.886749 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="27cfc9a8-5cbe-4493-865a-115bf389ec3b" containerName="collect-profiles" Feb 16 17:15:08 crc kubenswrapper[4794]: I0216 17:15:08.887644 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gv97q" Feb 16 17:15:08 crc kubenswrapper[4794]: I0216 17:15:08.900855 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gv97q"] Feb 16 17:15:08 crc kubenswrapper[4794]: I0216 17:15:08.941935 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46726706-d38e-4bcf-8296-2e16d5a21edd-catalog-content\") pod \"certified-operators-gv97q\" (UID: \"46726706-d38e-4bcf-8296-2e16d5a21edd\") " pod="openshift-marketplace/certified-operators-gv97q" Feb 16 17:15:08 crc kubenswrapper[4794]: I0216 17:15:08.942553 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46726706-d38e-4bcf-8296-2e16d5a21edd-utilities\") pod \"certified-operators-gv97q\" (UID: \"46726706-d38e-4bcf-8296-2e16d5a21edd\") " pod="openshift-marketplace/certified-operators-gv97q" Feb 16 17:15:08 crc kubenswrapper[4794]: I0216 17:15:08.942732 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f2fg\" (UniqueName: \"kubernetes.io/projected/46726706-d38e-4bcf-8296-2e16d5a21edd-kube-api-access-5f2fg\") pod \"certified-operators-gv97q\" (UID: \"46726706-d38e-4bcf-8296-2e16d5a21edd\") " pod="openshift-marketplace/certified-operators-gv97q" Feb 16 17:15:09 crc kubenswrapper[4794]: I0216 17:15:09.043934 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46726706-d38e-4bcf-8296-2e16d5a21edd-catalog-content\") pod \"certified-operators-gv97q\" (UID: \"46726706-d38e-4bcf-8296-2e16d5a21edd\") " pod="openshift-marketplace/certified-operators-gv97q" Feb 16 17:15:09 crc kubenswrapper[4794]: I0216 17:15:09.044004 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46726706-d38e-4bcf-8296-2e16d5a21edd-utilities\") pod \"certified-operators-gv97q\" (UID: \"46726706-d38e-4bcf-8296-2e16d5a21edd\") " pod="openshift-marketplace/certified-operators-gv97q" Feb 16 17:15:09 crc kubenswrapper[4794]: I0216 17:15:09.044042 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f2fg\" (UniqueName: \"kubernetes.io/projected/46726706-d38e-4bcf-8296-2e16d5a21edd-kube-api-access-5f2fg\") pod \"certified-operators-gv97q\" (UID: \"46726706-d38e-4bcf-8296-2e16d5a21edd\") " pod="openshift-marketplace/certified-operators-gv97q" Feb 16 17:15:09 crc kubenswrapper[4794]: I0216 17:15:09.044469 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46726706-d38e-4bcf-8296-2e16d5a21edd-catalog-content\") pod \"certified-operators-gv97q\" (UID: \"46726706-d38e-4bcf-8296-2e16d5a21edd\") " pod="openshift-marketplace/certified-operators-gv97q" Feb 16 17:15:09 crc kubenswrapper[4794]: I0216 17:15:09.044575 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46726706-d38e-4bcf-8296-2e16d5a21edd-utilities\") pod \"certified-operators-gv97q\" (UID: \"46726706-d38e-4bcf-8296-2e16d5a21edd\") " pod="openshift-marketplace/certified-operators-gv97q" Feb 16 17:15:09 crc kubenswrapper[4794]: I0216 17:15:09.062647 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f2fg\" (UniqueName: 
\"kubernetes.io/projected/46726706-d38e-4bcf-8296-2e16d5a21edd-kube-api-access-5f2fg\") pod \"certified-operators-gv97q\" (UID: \"46726706-d38e-4bcf-8296-2e16d5a21edd\") " pod="openshift-marketplace/certified-operators-gv97q" Feb 16 17:15:09 crc kubenswrapper[4794]: I0216 17:15:09.202062 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gv97q" Feb 16 17:15:09 crc kubenswrapper[4794]: I0216 17:15:09.716081 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gv97q"] Feb 16 17:15:09 crc kubenswrapper[4794]: W0216 17:15:09.722637 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46726706_d38e_4bcf_8296_2e16d5a21edd.slice/crio-b06677a1a636e1f460a0087a1916a4a64c60869ece14463542210028d9009f52 WatchSource:0}: Error finding container b06677a1a636e1f460a0087a1916a4a64c60869ece14463542210028d9009f52: Status 404 returned error can't find the container with id b06677a1a636e1f460a0087a1916a4a64c60869ece14463542210028d9009f52 Feb 16 17:15:10 crc kubenswrapper[4794]: I0216 17:15:10.251211 4794 generic.go:334] "Generic (PLEG): container finished" podID="46726706-d38e-4bcf-8296-2e16d5a21edd" containerID="0933f50662535eaee03f5fa45a13d9b06b76ba1a1c330cbd7a34dbbb9c4d3097" exitCode=0 Feb 16 17:15:10 crc kubenswrapper[4794]: I0216 17:15:10.251266 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gv97q" event={"ID":"46726706-d38e-4bcf-8296-2e16d5a21edd","Type":"ContainerDied","Data":"0933f50662535eaee03f5fa45a13d9b06b76ba1a1c330cbd7a34dbbb9c4d3097"} Feb 16 17:15:10 crc kubenswrapper[4794]: I0216 17:15:10.251332 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gv97q" 
event={"ID":"46726706-d38e-4bcf-8296-2e16d5a21edd","Type":"ContainerStarted","Data":"b06677a1a636e1f460a0087a1916a4a64c60869ece14463542210028d9009f52"} Feb 16 17:15:11 crc kubenswrapper[4794]: I0216 17:15:11.266886 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gv97q" event={"ID":"46726706-d38e-4bcf-8296-2e16d5a21edd","Type":"ContainerStarted","Data":"0fab34bc4b741cdc93c0d559185b4ffc1a972745028ee5906e486acf60baefc2"} Feb 16 17:15:12 crc kubenswrapper[4794]: I0216 17:15:12.275162 4794 generic.go:334] "Generic (PLEG): container finished" podID="46726706-d38e-4bcf-8296-2e16d5a21edd" containerID="0fab34bc4b741cdc93c0d559185b4ffc1a972745028ee5906e486acf60baefc2" exitCode=0 Feb 16 17:15:12 crc kubenswrapper[4794]: I0216 17:15:12.275230 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gv97q" event={"ID":"46726706-d38e-4bcf-8296-2e16d5a21edd","Type":"ContainerDied","Data":"0fab34bc4b741cdc93c0d559185b4ffc1a972745028ee5906e486acf60baefc2"} Feb 16 17:15:12 crc kubenswrapper[4794]: I0216 17:15:12.874512 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-nzzr7"] Feb 16 17:15:12 crc kubenswrapper[4794]: I0216 17:15:12.875739 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-nzzr7" Feb 16 17:15:12 crc kubenswrapper[4794]: I0216 17:15:12.878366 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-2h64b" Feb 16 17:15:12 crc kubenswrapper[4794]: I0216 17:15:12.879200 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 16 17:15:12 crc kubenswrapper[4794]: I0216 17:15:12.880941 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 16 17:15:12 crc kubenswrapper[4794]: I0216 17:15:12.892636 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-nzzr7"] Feb 16 17:15:12 crc kubenswrapper[4794]: I0216 17:15:12.898101 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj78c\" (UniqueName: \"kubernetes.io/projected/fb9092cd-fa5f-47de-9e4b-331f73e49c35-kube-api-access-rj78c\") pod \"nmstate-operator-694c9596b7-nzzr7\" (UID: \"fb9092cd-fa5f-47de-9e4b-331f73e49c35\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-nzzr7" Feb 16 17:15:12 crc kubenswrapper[4794]: I0216 17:15:12.999903 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj78c\" (UniqueName: \"kubernetes.io/projected/fb9092cd-fa5f-47de-9e4b-331f73e49c35-kube-api-access-rj78c\") pod \"nmstate-operator-694c9596b7-nzzr7\" (UID: \"fb9092cd-fa5f-47de-9e4b-331f73e49c35\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-nzzr7" Feb 16 17:15:13 crc kubenswrapper[4794]: I0216 17:15:13.019370 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj78c\" (UniqueName: \"kubernetes.io/projected/fb9092cd-fa5f-47de-9e4b-331f73e49c35-kube-api-access-rj78c\") pod \"nmstate-operator-694c9596b7-nzzr7\" (UID: 
\"fb9092cd-fa5f-47de-9e4b-331f73e49c35\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-nzzr7" Feb 16 17:15:13 crc kubenswrapper[4794]: I0216 17:15:13.190948 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-nzzr7" Feb 16 17:15:13 crc kubenswrapper[4794]: I0216 17:15:13.294864 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gv97q" event={"ID":"46726706-d38e-4bcf-8296-2e16d5a21edd","Type":"ContainerStarted","Data":"91274fd651622a3186ae4d5bd703e4b66821ab7f5f4e5bb4576a40d3b458ceb1"} Feb 16 17:15:13 crc kubenswrapper[4794]: I0216 17:15:13.326502 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gv97q" podStartSLOduration=2.8987768000000003 podStartE2EDuration="5.32648254s" podCreationTimestamp="2026-02-16 17:15:08 +0000 UTC" firstStartedPulling="2026-02-16 17:15:10.253051165 +0000 UTC m=+936.201145812" lastFinishedPulling="2026-02-16 17:15:12.680756905 +0000 UTC m=+938.628851552" observedRunningTime="2026-02-16 17:15:13.322143497 +0000 UTC m=+939.270238154" watchObservedRunningTime="2026-02-16 17:15:13.32648254 +0000 UTC m=+939.274577187" Feb 16 17:15:13 crc kubenswrapper[4794]: I0216 17:15:13.639834 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-nzzr7"] Feb 16 17:15:13 crc kubenswrapper[4794]: W0216 17:15:13.645103 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb9092cd_fa5f_47de_9e4b_331f73e49c35.slice/crio-19a93069cb6d91787302bca0784c7ae063a95493ebd1f48ab9cd1b3b2345d425 WatchSource:0}: Error finding container 19a93069cb6d91787302bca0784c7ae063a95493ebd1f48ab9cd1b3b2345d425: Status 404 returned error can't find the container with id 19a93069cb6d91787302bca0784c7ae063a95493ebd1f48ab9cd1b3b2345d425 Feb 16 17:15:14 crc 
kubenswrapper[4794]: I0216 17:15:14.302851 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-nzzr7" event={"ID":"fb9092cd-fa5f-47de-9e4b-331f73e49c35","Type":"ContainerStarted","Data":"19a93069cb6d91787302bca0784c7ae063a95493ebd1f48ab9cd1b3b2345d425"} Feb 16 17:15:16 crc kubenswrapper[4794]: I0216 17:15:16.320602 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-nzzr7" event={"ID":"fb9092cd-fa5f-47de-9e4b-331f73e49c35","Type":"ContainerStarted","Data":"bec87cd20cbc94af9fe493d304cbdf2708ae819fa383b001bbc0ef994dbc4b3f"} Feb 16 17:15:16 crc kubenswrapper[4794]: I0216 17:15:16.352374 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-nzzr7" podStartSLOduration=2.600042393 podStartE2EDuration="4.352356068s" podCreationTimestamp="2026-02-16 17:15:12 +0000 UTC" firstStartedPulling="2026-02-16 17:15:13.647023292 +0000 UTC m=+939.595117939" lastFinishedPulling="2026-02-16 17:15:15.399336967 +0000 UTC m=+941.347431614" observedRunningTime="2026-02-16 17:15:16.344272679 +0000 UTC m=+942.292367366" watchObservedRunningTime="2026-02-16 17:15:16.352356068 +0000 UTC m=+942.300450715" Feb 16 17:15:19 crc kubenswrapper[4794]: I0216 17:15:19.202489 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gv97q" Feb 16 17:15:19 crc kubenswrapper[4794]: I0216 17:15:19.203450 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gv97q" Feb 16 17:15:19 crc kubenswrapper[4794]: I0216 17:15:19.244067 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gv97q" Feb 16 17:15:19 crc kubenswrapper[4794]: I0216 17:15:19.405735 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-gv97q" Feb 16 17:15:20 crc kubenswrapper[4794]: I0216 17:15:20.088892 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gv97q"] Feb 16 17:15:20 crc kubenswrapper[4794]: I0216 17:15:20.140590 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:15:20 crc kubenswrapper[4794]: I0216 17:15:20.140679 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:15:20 crc kubenswrapper[4794]: I0216 17:15:20.140734 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 17:15:20 crc kubenswrapper[4794]: I0216 17:15:20.141507 4794 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3aa97207ca6eb1342d7e8e60d0b01510075367f6246c193f5626cd5253489630"} pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:15:20 crc kubenswrapper[4794]: I0216 17:15:20.141577 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" containerID="cri-o://3aa97207ca6eb1342d7e8e60d0b01510075367f6246c193f5626cd5253489630" 
gracePeriod=600 Feb 16 17:15:20 crc kubenswrapper[4794]: I0216 17:15:20.355558 4794 generic.go:334] "Generic (PLEG): container finished" podID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerID="3aa97207ca6eb1342d7e8e60d0b01510075367f6246c193f5626cd5253489630" exitCode=0 Feb 16 17:15:20 crc kubenswrapper[4794]: I0216 17:15:20.355639 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerDied","Data":"3aa97207ca6eb1342d7e8e60d0b01510075367f6246c193f5626cd5253489630"} Feb 16 17:15:20 crc kubenswrapper[4794]: I0216 17:15:20.355702 4794 scope.go:117] "RemoveContainer" containerID="1242c1c0cd51c797081153357fc1a3afcbb8aac8f950b8ce178092b5638f56c5" Feb 16 17:15:21 crc kubenswrapper[4794]: I0216 17:15:21.370609 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"cce390b6213c7330d230e979677c08327d065b64facb3363518840eb14ee0ef8"} Feb 16 17:15:21 crc kubenswrapper[4794]: I0216 17:15:21.370649 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gv97q" podUID="46726706-d38e-4bcf-8296-2e16d5a21edd" containerName="registry-server" containerID="cri-o://91274fd651622a3186ae4d5bd703e4b66821ab7f5f4e5bb4576a40d3b458ceb1" gracePeriod=2 Feb 16 17:15:21 crc kubenswrapper[4794]: I0216 17:15:21.787166 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gv97q" Feb 16 17:15:21 crc kubenswrapper[4794]: I0216 17:15:21.941021 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f2fg\" (UniqueName: \"kubernetes.io/projected/46726706-d38e-4bcf-8296-2e16d5a21edd-kube-api-access-5f2fg\") pod \"46726706-d38e-4bcf-8296-2e16d5a21edd\" (UID: \"46726706-d38e-4bcf-8296-2e16d5a21edd\") " Feb 16 17:15:21 crc kubenswrapper[4794]: I0216 17:15:21.941182 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46726706-d38e-4bcf-8296-2e16d5a21edd-utilities\") pod \"46726706-d38e-4bcf-8296-2e16d5a21edd\" (UID: \"46726706-d38e-4bcf-8296-2e16d5a21edd\") " Feb 16 17:15:21 crc kubenswrapper[4794]: I0216 17:15:21.941326 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46726706-d38e-4bcf-8296-2e16d5a21edd-catalog-content\") pod \"46726706-d38e-4bcf-8296-2e16d5a21edd\" (UID: \"46726706-d38e-4bcf-8296-2e16d5a21edd\") " Feb 16 17:15:21 crc kubenswrapper[4794]: I0216 17:15:21.942127 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46726706-d38e-4bcf-8296-2e16d5a21edd-utilities" (OuterVolumeSpecName: "utilities") pod "46726706-d38e-4bcf-8296-2e16d5a21edd" (UID: "46726706-d38e-4bcf-8296-2e16d5a21edd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:15:21 crc kubenswrapper[4794]: I0216 17:15:21.953448 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46726706-d38e-4bcf-8296-2e16d5a21edd-kube-api-access-5f2fg" (OuterVolumeSpecName: "kube-api-access-5f2fg") pod "46726706-d38e-4bcf-8296-2e16d5a21edd" (UID: "46726706-d38e-4bcf-8296-2e16d5a21edd"). InnerVolumeSpecName "kube-api-access-5f2fg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:15:21 crc kubenswrapper[4794]: I0216 17:15:21.995123 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46726706-d38e-4bcf-8296-2e16d5a21edd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46726706-d38e-4bcf-8296-2e16d5a21edd" (UID: "46726706-d38e-4bcf-8296-2e16d5a21edd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.042846 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46726706-d38e-4bcf-8296-2e16d5a21edd-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.042879 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46726706-d38e-4bcf-8296-2e16d5a21edd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.042892 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f2fg\" (UniqueName: \"kubernetes.io/projected/46726706-d38e-4bcf-8296-2e16d5a21edd-kube-api-access-5f2fg\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.379288 4794 generic.go:334] "Generic (PLEG): container finished" podID="46726706-d38e-4bcf-8296-2e16d5a21edd" containerID="91274fd651622a3186ae4d5bd703e4b66821ab7f5f4e5bb4576a40d3b458ceb1" exitCode=0 Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.379421 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gv97q" event={"ID":"46726706-d38e-4bcf-8296-2e16d5a21edd","Type":"ContainerDied","Data":"91274fd651622a3186ae4d5bd703e4b66821ab7f5f4e5bb4576a40d3b458ceb1"} Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.379738 4794 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-gv97q" event={"ID":"46726706-d38e-4bcf-8296-2e16d5a21edd","Type":"ContainerDied","Data":"b06677a1a636e1f460a0087a1916a4a64c60869ece14463542210028d9009f52"} Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.379490 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gv97q" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.379778 4794 scope.go:117] "RemoveContainer" containerID="91274fd651622a3186ae4d5bd703e4b66821ab7f5f4e5bb4576a40d3b458ceb1" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.405140 4794 scope.go:117] "RemoveContainer" containerID="0fab34bc4b741cdc93c0d559185b4ffc1a972745028ee5906e486acf60baefc2" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.409708 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gv97q"] Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.418939 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gv97q"] Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.431836 4794 scope.go:117] "RemoveContainer" containerID="0933f50662535eaee03f5fa45a13d9b06b76ba1a1c330cbd7a34dbbb9c4d3097" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.458894 4794 scope.go:117] "RemoveContainer" containerID="91274fd651622a3186ae4d5bd703e4b66821ab7f5f4e5bb4576a40d3b458ceb1" Feb 16 17:15:22 crc kubenswrapper[4794]: E0216 17:15:22.459427 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91274fd651622a3186ae4d5bd703e4b66821ab7f5f4e5bb4576a40d3b458ceb1\": container with ID starting with 91274fd651622a3186ae4d5bd703e4b66821ab7f5f4e5bb4576a40d3b458ceb1 not found: ID does not exist" containerID="91274fd651622a3186ae4d5bd703e4b66821ab7f5f4e5bb4576a40d3b458ceb1" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 
17:15:22.459469 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91274fd651622a3186ae4d5bd703e4b66821ab7f5f4e5bb4576a40d3b458ceb1"} err="failed to get container status \"91274fd651622a3186ae4d5bd703e4b66821ab7f5f4e5bb4576a40d3b458ceb1\": rpc error: code = NotFound desc = could not find container \"91274fd651622a3186ae4d5bd703e4b66821ab7f5f4e5bb4576a40d3b458ceb1\": container with ID starting with 91274fd651622a3186ae4d5bd703e4b66821ab7f5f4e5bb4576a40d3b458ceb1 not found: ID does not exist" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.459496 4794 scope.go:117] "RemoveContainer" containerID="0fab34bc4b741cdc93c0d559185b4ffc1a972745028ee5906e486acf60baefc2" Feb 16 17:15:22 crc kubenswrapper[4794]: E0216 17:15:22.459843 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fab34bc4b741cdc93c0d559185b4ffc1a972745028ee5906e486acf60baefc2\": container with ID starting with 0fab34bc4b741cdc93c0d559185b4ffc1a972745028ee5906e486acf60baefc2 not found: ID does not exist" containerID="0fab34bc4b741cdc93c0d559185b4ffc1a972745028ee5906e486acf60baefc2" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.459905 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fab34bc4b741cdc93c0d559185b4ffc1a972745028ee5906e486acf60baefc2"} err="failed to get container status \"0fab34bc4b741cdc93c0d559185b4ffc1a972745028ee5906e486acf60baefc2\": rpc error: code = NotFound desc = could not find container \"0fab34bc4b741cdc93c0d559185b4ffc1a972745028ee5906e486acf60baefc2\": container with ID starting with 0fab34bc4b741cdc93c0d559185b4ffc1a972745028ee5906e486acf60baefc2 not found: ID does not exist" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.459925 4794 scope.go:117] "RemoveContainer" containerID="0933f50662535eaee03f5fa45a13d9b06b76ba1a1c330cbd7a34dbbb9c4d3097" Feb 16 17:15:22 crc 
kubenswrapper[4794]: E0216 17:15:22.460199 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0933f50662535eaee03f5fa45a13d9b06b76ba1a1c330cbd7a34dbbb9c4d3097\": container with ID starting with 0933f50662535eaee03f5fa45a13d9b06b76ba1a1c330cbd7a34dbbb9c4d3097 not found: ID does not exist" containerID="0933f50662535eaee03f5fa45a13d9b06b76ba1a1c330cbd7a34dbbb9c4d3097" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.460229 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0933f50662535eaee03f5fa45a13d9b06b76ba1a1c330cbd7a34dbbb9c4d3097"} err="failed to get container status \"0933f50662535eaee03f5fa45a13d9b06b76ba1a1c330cbd7a34dbbb9c4d3097\": rpc error: code = NotFound desc = could not find container \"0933f50662535eaee03f5fa45a13d9b06b76ba1a1c330cbd7a34dbbb9c4d3097\": container with ID starting with 0933f50662535eaee03f5fa45a13d9b06b76ba1a1c330cbd7a34dbbb9c4d3097 not found: ID does not exist" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.733853 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-9wmw5"] Feb 16 17:15:22 crc kubenswrapper[4794]: E0216 17:15:22.734121 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46726706-d38e-4bcf-8296-2e16d5a21edd" containerName="extract-content" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.734133 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="46726706-d38e-4bcf-8296-2e16d5a21edd" containerName="extract-content" Feb 16 17:15:22 crc kubenswrapper[4794]: E0216 17:15:22.734165 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46726706-d38e-4bcf-8296-2e16d5a21edd" containerName="registry-server" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.734172 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="46726706-d38e-4bcf-8296-2e16d5a21edd" 
containerName="registry-server" Feb 16 17:15:22 crc kubenswrapper[4794]: E0216 17:15:22.734185 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46726706-d38e-4bcf-8296-2e16d5a21edd" containerName="extract-utilities" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.734193 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="46726706-d38e-4bcf-8296-2e16d5a21edd" containerName="extract-utilities" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.734364 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="46726706-d38e-4bcf-8296-2e16d5a21edd" containerName="registry-server" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.735127 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-9wmw5" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.737287 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-5c8h8" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.743562 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-n99t7"] Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.744725 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-n99t7" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.747128 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.757720 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-v7cfc"] Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.758925 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-v7cfc" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.847005 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46726706-d38e-4bcf-8296-2e16d5a21edd" path="/var/lib/kubelet/pods/46726706-d38e-4bcf-8296-2e16d5a21edd/volumes" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.847793 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-9wmw5"] Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.859329 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/541b811d-ad2c-43e3-aa09-82833010ec62-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-n99t7\" (UID: \"541b811d-ad2c-43e3-aa09-82833010ec62\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-n99t7" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.860429 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87vtr\" (UniqueName: \"kubernetes.io/projected/541b811d-ad2c-43e3-aa09-82833010ec62-kube-api-access-87vtr\") pod \"nmstate-webhook-866bcb46dc-n99t7\" (UID: \"541b811d-ad2c-43e3-aa09-82833010ec62\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-n99t7" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.860533 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82577\" (UniqueName: \"kubernetes.io/projected/fb8ce142-2a32-4900-9a3b-7534607c176c-kube-api-access-82577\") pod \"nmstate-metrics-58c85c668d-9wmw5\" (UID: \"fb8ce142-2a32-4900-9a3b-7534607c176c\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-9wmw5" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.867390 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-n99t7"] Feb 16 17:15:22 crc 
kubenswrapper[4794]: I0216 17:15:22.962942 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82577\" (UniqueName: \"kubernetes.io/projected/fb8ce142-2a32-4900-9a3b-7534607c176c-kube-api-access-82577\") pod \"nmstate-metrics-58c85c668d-9wmw5\" (UID: \"fb8ce142-2a32-4900-9a3b-7534607c176c\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-9wmw5" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.963011 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/eeb5a012-73db-4509-a3b5-35c56601ce33-nmstate-lock\") pod \"nmstate-handler-v7cfc\" (UID: \"eeb5a012-73db-4509-a3b5-35c56601ce33\") " pod="openshift-nmstate/nmstate-handler-v7cfc" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.963095 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/541b811d-ad2c-43e3-aa09-82833010ec62-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-n99t7\" (UID: \"541b811d-ad2c-43e3-aa09-82833010ec62\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-n99t7" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.963125 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87vtr\" (UniqueName: \"kubernetes.io/projected/541b811d-ad2c-43e3-aa09-82833010ec62-kube-api-access-87vtr\") pod \"nmstate-webhook-866bcb46dc-n99t7\" (UID: \"541b811d-ad2c-43e3-aa09-82833010ec62\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-n99t7" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.963144 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/eeb5a012-73db-4509-a3b5-35c56601ce33-dbus-socket\") pod \"nmstate-handler-v7cfc\" (UID: \"eeb5a012-73db-4509-a3b5-35c56601ce33\") " 
pod="openshift-nmstate/nmstate-handler-v7cfc" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.963175 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crqmw\" (UniqueName: \"kubernetes.io/projected/eeb5a012-73db-4509-a3b5-35c56601ce33-kube-api-access-crqmw\") pod \"nmstate-handler-v7cfc\" (UID: \"eeb5a012-73db-4509-a3b5-35c56601ce33\") " pod="openshift-nmstate/nmstate-handler-v7cfc" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.963192 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/eeb5a012-73db-4509-a3b5-35c56601ce33-ovs-socket\") pod \"nmstate-handler-v7cfc\" (UID: \"eeb5a012-73db-4509-a3b5-35c56601ce33\") " pod="openshift-nmstate/nmstate-handler-v7cfc" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.964126 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-gcn4p"] Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.965446 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-gcn4p" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.971958 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/541b811d-ad2c-43e3-aa09-82833010ec62-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-n99t7\" (UID: \"541b811d-ad2c-43e3-aa09-82833010ec62\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-n99t7" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.974480 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-zpnz4" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.978782 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.978927 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 16 17:15:22 crc kubenswrapper[4794]: I0216 17:15:22.981004 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-gcn4p"] Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:22.999972 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87vtr\" (UniqueName: \"kubernetes.io/projected/541b811d-ad2c-43e3-aa09-82833010ec62-kube-api-access-87vtr\") pod \"nmstate-webhook-866bcb46dc-n99t7\" (UID: \"541b811d-ad2c-43e3-aa09-82833010ec62\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-n99t7" Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.007080 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82577\" (UniqueName: \"kubernetes.io/projected/fb8ce142-2a32-4900-9a3b-7534607c176c-kube-api-access-82577\") pod \"nmstate-metrics-58c85c668d-9wmw5\" (UID: \"fb8ce142-2a32-4900-9a3b-7534607c176c\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-9wmw5" 
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.055815 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-9wmw5"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.067527 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/016d1c49-2466-4430-9c2d-5402c0c46fe3-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-gcn4p\" (UID: \"016d1c49-2466-4430-9c2d-5402c0c46fe3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-gcn4p"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.067603 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5jwm\" (UniqueName: \"kubernetes.io/projected/016d1c49-2466-4430-9c2d-5402c0c46fe3-kube-api-access-m5jwm\") pod \"nmstate-console-plugin-5c78fc5d65-gcn4p\" (UID: \"016d1c49-2466-4430-9c2d-5402c0c46fe3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-gcn4p"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.067634 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/eeb5a012-73db-4509-a3b5-35c56601ce33-dbus-socket\") pod \"nmstate-handler-v7cfc\" (UID: \"eeb5a012-73db-4509-a3b5-35c56601ce33\") " pod="openshift-nmstate/nmstate-handler-v7cfc"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.067655 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crqmw\" (UniqueName: \"kubernetes.io/projected/eeb5a012-73db-4509-a3b5-35c56601ce33-kube-api-access-crqmw\") pod \"nmstate-handler-v7cfc\" (UID: \"eeb5a012-73db-4509-a3b5-35c56601ce33\") " pod="openshift-nmstate/nmstate-handler-v7cfc"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.067680 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/eeb5a012-73db-4509-a3b5-35c56601ce33-ovs-socket\") pod \"nmstate-handler-v7cfc\" (UID: \"eeb5a012-73db-4509-a3b5-35c56601ce33\") " pod="openshift-nmstate/nmstate-handler-v7cfc"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.067754 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/eeb5a012-73db-4509-a3b5-35c56601ce33-nmstate-lock\") pod \"nmstate-handler-v7cfc\" (UID: \"eeb5a012-73db-4509-a3b5-35c56601ce33\") " pod="openshift-nmstate/nmstate-handler-v7cfc"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.067783 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/016d1c49-2466-4430-9c2d-5402c0c46fe3-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-gcn4p\" (UID: \"016d1c49-2466-4430-9c2d-5402c0c46fe3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-gcn4p"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.068139 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/eeb5a012-73db-4509-a3b5-35c56601ce33-dbus-socket\") pod \"nmstate-handler-v7cfc\" (UID: \"eeb5a012-73db-4509-a3b5-35c56601ce33\") " pod="openshift-nmstate/nmstate-handler-v7cfc"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.068195 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/eeb5a012-73db-4509-a3b5-35c56601ce33-ovs-socket\") pod \"nmstate-handler-v7cfc\" (UID: \"eeb5a012-73db-4509-a3b5-35c56601ce33\") " pod="openshift-nmstate/nmstate-handler-v7cfc"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.068217 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/eeb5a012-73db-4509-a3b5-35c56601ce33-nmstate-lock\") pod \"nmstate-handler-v7cfc\" (UID: \"eeb5a012-73db-4509-a3b5-35c56601ce33\") " pod="openshift-nmstate/nmstate-handler-v7cfc"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.086828 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-n99t7"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.121906 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crqmw\" (UniqueName: \"kubernetes.io/projected/eeb5a012-73db-4509-a3b5-35c56601ce33-kube-api-access-crqmw\") pod \"nmstate-handler-v7cfc\" (UID: \"eeb5a012-73db-4509-a3b5-35c56601ce33\") " pod="openshift-nmstate/nmstate-handler-v7cfc"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.168868 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/016d1c49-2466-4430-9c2d-5402c0c46fe3-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-gcn4p\" (UID: \"016d1c49-2466-4430-9c2d-5402c0c46fe3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-gcn4p"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.168941 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/016d1c49-2466-4430-9c2d-5402c0c46fe3-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-gcn4p\" (UID: \"016d1c49-2466-4430-9c2d-5402c0c46fe3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-gcn4p"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.168987 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5jwm\" (UniqueName: \"kubernetes.io/projected/016d1c49-2466-4430-9c2d-5402c0c46fe3-kube-api-access-m5jwm\") pod \"nmstate-console-plugin-5c78fc5d65-gcn4p\" (UID: \"016d1c49-2466-4430-9c2d-5402c0c46fe3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-gcn4p"
Feb 16 17:15:23 crc kubenswrapper[4794]: E0216 17:15:23.171154 4794 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Feb 16 17:15:23 crc kubenswrapper[4794]: E0216 17:15:23.171215 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/016d1c49-2466-4430-9c2d-5402c0c46fe3-plugin-serving-cert podName:016d1c49-2466-4430-9c2d-5402c0c46fe3 nodeName:}" failed. No retries permitted until 2026-02-16 17:15:23.671199587 +0000 UTC m=+949.619294234 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/016d1c49-2466-4430-9c2d-5402c0c46fe3-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-gcn4p" (UID: "016d1c49-2466-4430-9c2d-5402c0c46fe3") : secret "plugin-serving-cert" not found
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.172030 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/016d1c49-2466-4430-9c2d-5402c0c46fe3-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-gcn4p\" (UID: \"016d1c49-2466-4430-9c2d-5402c0c46fe3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-gcn4p"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.211241 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-576f6bf7c-mkh5d"]
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.220514 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5jwm\" (UniqueName: \"kubernetes.io/projected/016d1c49-2466-4430-9c2d-5402c0c46fe3-kube-api-access-m5jwm\") pod \"nmstate-console-plugin-5c78fc5d65-gcn4p\" (UID: \"016d1c49-2466-4430-9c2d-5402c0c46fe3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-gcn4p"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.221527 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.254024 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-576f6bf7c-mkh5d"]
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.310587 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jd2x8"]
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.312632 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jd2x8"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.329147 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jd2x8"]
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.373477 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-console-serving-cert\") pod \"console-576f6bf7c-mkh5d\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.373776 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-trusted-ca-bundle\") pod \"console-576f6bf7c-mkh5d\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.373812 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-oauth-serving-cert\") pod \"console-576f6bf7c-mkh5d\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.373845 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-console-oauth-config\") pod \"console-576f6bf7c-mkh5d\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.373907 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-service-ca\") pod \"console-576f6bf7c-mkh5d\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.373934 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-console-config\") pod \"console-576f6bf7c-mkh5d\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.373972 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d74cx\" (UniqueName: \"kubernetes.io/projected/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-kube-api-access-d74cx\") pod \"console-576f6bf7c-mkh5d\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.397007 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-v7cfc"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.475282 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-console-config\") pod \"console-576f6bf7c-mkh5d\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.475394 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d74cx\" (UniqueName: \"kubernetes.io/projected/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-kube-api-access-d74cx\") pod \"console-576f6bf7c-mkh5d\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.475440 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58a927f6-aaee-4e8f-a4cf-09d067ec88ed-utilities\") pod \"community-operators-jd2x8\" (UID: \"58a927f6-aaee-4e8f-a4cf-09d067ec88ed\") " pod="openshift-marketplace/community-operators-jd2x8"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.475535 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-console-serving-cert\") pod \"console-576f6bf7c-mkh5d\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.475557 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-trusted-ca-bundle\") pod \"console-576f6bf7c-mkh5d\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.475586 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-oauth-serving-cert\") pod \"console-576f6bf7c-mkh5d\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.475613 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-console-oauth-config\") pod \"console-576f6bf7c-mkh5d\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.475652 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p98mr\" (UniqueName: \"kubernetes.io/projected/58a927f6-aaee-4e8f-a4cf-09d067ec88ed-kube-api-access-p98mr\") pod \"community-operators-jd2x8\" (UID: \"58a927f6-aaee-4e8f-a4cf-09d067ec88ed\") " pod="openshift-marketplace/community-operators-jd2x8"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.475683 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58a927f6-aaee-4e8f-a4cf-09d067ec88ed-catalog-content\") pod \"community-operators-jd2x8\" (UID: \"58a927f6-aaee-4e8f-a4cf-09d067ec88ed\") " pod="openshift-marketplace/community-operators-jd2x8"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.475718 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-service-ca\") pod \"console-576f6bf7c-mkh5d\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.476692 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-service-ca\") pod \"console-576f6bf7c-mkh5d\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.476700 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-oauth-serving-cert\") pod \"console-576f6bf7c-mkh5d\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.476702 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-console-config\") pod \"console-576f6bf7c-mkh5d\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.478197 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-trusted-ca-bundle\") pod \"console-576f6bf7c-mkh5d\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.495382 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-console-serving-cert\") pod \"console-576f6bf7c-mkh5d\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.495845 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-console-oauth-config\") pod \"console-576f6bf7c-mkh5d\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.500095 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d74cx\" (UniqueName: \"kubernetes.io/projected/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-kube-api-access-d74cx\") pod \"console-576f6bf7c-mkh5d\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.576057 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.577487 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p98mr\" (UniqueName: \"kubernetes.io/projected/58a927f6-aaee-4e8f-a4cf-09d067ec88ed-kube-api-access-p98mr\") pod \"community-operators-jd2x8\" (UID: \"58a927f6-aaee-4e8f-a4cf-09d067ec88ed\") " pod="openshift-marketplace/community-operators-jd2x8"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.577531 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58a927f6-aaee-4e8f-a4cf-09d067ec88ed-catalog-content\") pod \"community-operators-jd2x8\" (UID: \"58a927f6-aaee-4e8f-a4cf-09d067ec88ed\") " pod="openshift-marketplace/community-operators-jd2x8"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.577606 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58a927f6-aaee-4e8f-a4cf-09d067ec88ed-utilities\") pod \"community-operators-jd2x8\" (UID: \"58a927f6-aaee-4e8f-a4cf-09d067ec88ed\") " pod="openshift-marketplace/community-operators-jd2x8"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.578062 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58a927f6-aaee-4e8f-a4cf-09d067ec88ed-utilities\") pod \"community-operators-jd2x8\" (UID: \"58a927f6-aaee-4e8f-a4cf-09d067ec88ed\") " pod="openshift-marketplace/community-operators-jd2x8"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.578690 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58a927f6-aaee-4e8f-a4cf-09d067ec88ed-catalog-content\") pod \"community-operators-jd2x8\" (UID: \"58a927f6-aaee-4e8f-a4cf-09d067ec88ed\") " pod="openshift-marketplace/community-operators-jd2x8"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.600598 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p98mr\" (UniqueName: \"kubernetes.io/projected/58a927f6-aaee-4e8f-a4cf-09d067ec88ed-kube-api-access-p98mr\") pod \"community-operators-jd2x8\" (UID: \"58a927f6-aaee-4e8f-a4cf-09d067ec88ed\") " pod="openshift-marketplace/community-operators-jd2x8"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.654092 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jd2x8"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.676855 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-n99t7"]
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.678947 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/016d1c49-2466-4430-9c2d-5402c0c46fe3-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-gcn4p\" (UID: \"016d1c49-2466-4430-9c2d-5402c0c46fe3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-gcn4p"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.706172 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/016d1c49-2466-4430-9c2d-5402c0c46fe3-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-gcn4p\" (UID: \"016d1c49-2466-4430-9c2d-5402c0c46fe3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-gcn4p"
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.785524 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-9wmw5"]
Feb 16 17:15:23 crc kubenswrapper[4794]: W0216 17:15:23.797540 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb8ce142_2a32_4900_9a3b_7534607c176c.slice/crio-37338c1d11dffdc728fe7ca3ffa2d4837d27334350cc26ffb7a8e384601091c2 WatchSource:0}: Error finding container 37338c1d11dffdc728fe7ca3ffa2d4837d27334350cc26ffb7a8e384601091c2: Status 404 returned error can't find the container with id 37338c1d11dffdc728fe7ca3ffa2d4837d27334350cc26ffb7a8e384601091c2
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.898018 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-576f6bf7c-mkh5d"]
Feb 16 17:15:23 crc kubenswrapper[4794]: I0216 17:15:23.976044 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-gcn4p"
Feb 16 17:15:24 crc kubenswrapper[4794]: I0216 17:15:24.090283 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jd2x8"]
Feb 16 17:15:24 crc kubenswrapper[4794]: W0216 17:15:24.104123 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58a927f6_aaee_4e8f_a4cf_09d067ec88ed.slice/crio-0e72d9eaf79976edcdd26611188e5a4c7e6afbb8b5321cc4fefcdfb162d51088 WatchSource:0}: Error finding container 0e72d9eaf79976edcdd26611188e5a4c7e6afbb8b5321cc4fefcdfb162d51088: Status 404 returned error can't find the container with id 0e72d9eaf79976edcdd26611188e5a4c7e6afbb8b5321cc4fefcdfb162d51088
Feb 16 17:15:24 crc kubenswrapper[4794]: I0216 17:15:24.399923 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-576f6bf7c-mkh5d" event={"ID":"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c","Type":"ContainerStarted","Data":"bb82561ee7b85bb649642db64d1c9def75f7f9722c2e24704b38b18398a51d21"}
Feb 16 17:15:24 crc kubenswrapper[4794]: I0216 17:15:24.400628 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-576f6bf7c-mkh5d" event={"ID":"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c","Type":"ContainerStarted","Data":"d75f1d1aa108dfa2b7102778f83ee9b54bc07371a3b99352544184204aab2d65"}
Feb 16 17:15:24 crc kubenswrapper[4794]: I0216 17:15:24.402978 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-v7cfc" event={"ID":"eeb5a012-73db-4509-a3b5-35c56601ce33","Type":"ContainerStarted","Data":"2c0e4b41dabcfa817bb8c85d5ebafc0352159e97a14b8f10a682e057d9fe6ad2"}
Feb 16 17:15:24 crc kubenswrapper[4794]: I0216 17:15:24.404471 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-n99t7" event={"ID":"541b811d-ad2c-43e3-aa09-82833010ec62","Type":"ContainerStarted","Data":"fb644342946a038e13ae6a3c3fef51297e672222012fa263fc0736d11e5d2607"}
Feb 16 17:15:24 crc kubenswrapper[4794]: I0216 17:15:24.405898 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-9wmw5" event={"ID":"fb8ce142-2a32-4900-9a3b-7534607c176c","Type":"ContainerStarted","Data":"37338c1d11dffdc728fe7ca3ffa2d4837d27334350cc26ffb7a8e384601091c2"}
Feb 16 17:15:24 crc kubenswrapper[4794]: I0216 17:15:24.408166 4794 generic.go:334] "Generic (PLEG): container finished" podID="58a927f6-aaee-4e8f-a4cf-09d067ec88ed" containerID="01d114b9981287f412c665837450d16e90c33ff89532fa61a5c2d6324ad76b13" exitCode=0
Feb 16 17:15:24 crc kubenswrapper[4794]: I0216 17:15:24.408208 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jd2x8" event={"ID":"58a927f6-aaee-4e8f-a4cf-09d067ec88ed","Type":"ContainerDied","Data":"01d114b9981287f412c665837450d16e90c33ff89532fa61a5c2d6324ad76b13"}
Feb 16 17:15:24 crc kubenswrapper[4794]: I0216 17:15:24.408230 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jd2x8" event={"ID":"58a927f6-aaee-4e8f-a4cf-09d067ec88ed","Type":"ContainerStarted","Data":"0e72d9eaf79976edcdd26611188e5a4c7e6afbb8b5321cc4fefcdfb162d51088"}
Feb 16 17:15:24 crc kubenswrapper[4794]: I0216 17:15:24.427721 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-576f6bf7c-mkh5d" podStartSLOduration=1.427697506 podStartE2EDuration="1.427697506s" podCreationTimestamp="2026-02-16 17:15:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:15:24.417575359 +0000 UTC m=+950.365670006" watchObservedRunningTime="2026-02-16 17:15:24.427697506 +0000 UTC m=+950.375792153"
Feb 16 17:15:24 crc kubenswrapper[4794]: I0216 17:15:24.497544 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-gcn4p"]
Feb 16 17:15:25 crc kubenswrapper[4794]: I0216 17:15:25.418889 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jd2x8" event={"ID":"58a927f6-aaee-4e8f-a4cf-09d067ec88ed","Type":"ContainerStarted","Data":"644f42457c46b49edb4a1734caf4fff748988c19829a8116854a1eab81ab1a9b"}
Feb 16 17:15:25 crc kubenswrapper[4794]: I0216 17:15:25.420896 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-gcn4p" event={"ID":"016d1c49-2466-4430-9c2d-5402c0c46fe3","Type":"ContainerStarted","Data":"efeb2faafea77c0663abd36faf28fa51a4841f9a9c6f0172a46d071b721f29df"}
Feb 16 17:15:26 crc kubenswrapper[4794]: I0216 17:15:26.429402 4794 generic.go:334] "Generic (PLEG): container finished" podID="58a927f6-aaee-4e8f-a4cf-09d067ec88ed" containerID="644f42457c46b49edb4a1734caf4fff748988c19829a8116854a1eab81ab1a9b" exitCode=0
Feb 16 17:15:26 crc kubenswrapper[4794]: I0216 17:15:26.429566 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jd2x8" event={"ID":"58a927f6-aaee-4e8f-a4cf-09d067ec88ed","Type":"ContainerDied","Data":"644f42457c46b49edb4a1734caf4fff748988c19829a8116854a1eab81ab1a9b"}
Feb 16 17:15:28 crc kubenswrapper[4794]: I0216 17:15:28.450283 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-v7cfc" event={"ID":"eeb5a012-73db-4509-a3b5-35c56601ce33","Type":"ContainerStarted","Data":"53bc99e08d9e4cd76907d33542bbd8e6b54e843487f5d5faa2061c3f1380294c"}
Feb 16 17:15:28 crc kubenswrapper[4794]: I0216 17:15:28.452402 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-v7cfc"
Feb 16 17:15:28 crc kubenswrapper[4794]: I0216 17:15:28.454255 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-gcn4p" event={"ID":"016d1c49-2466-4430-9c2d-5402c0c46fe3","Type":"ContainerStarted","Data":"fc1c06ddc6e8b5546081649fa612d79817523f80e87ecd09e0860677dd680c16"}
Feb 16 17:15:28 crc kubenswrapper[4794]: I0216 17:15:28.456160 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-n99t7" event={"ID":"541b811d-ad2c-43e3-aa09-82833010ec62","Type":"ContainerStarted","Data":"0667492c73c3f42cc283304506239f997f6f2c01078ca287c319f67ea9a83853"}
Feb 16 17:15:28 crc kubenswrapper[4794]: I0216 17:15:28.457532 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-n99t7"
Feb 16 17:15:28 crc kubenswrapper[4794]: I0216 17:15:28.457885 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-9wmw5" event={"ID":"fb8ce142-2a32-4900-9a3b-7534607c176c","Type":"ContainerStarted","Data":"54ab0d5bb361f9e570391f31ef6669c4e24996c6c0d07e16ed4b5cdb5a19adc7"}
Feb 16 17:15:28 crc kubenswrapper[4794]: I0216 17:15:28.461220 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jd2x8" event={"ID":"58a927f6-aaee-4e8f-a4cf-09d067ec88ed","Type":"ContainerStarted","Data":"6969a8ee336a933fa97574fe5b3dc5009eca04b6665798d7f20e1ca65462a931"}
Feb 16 17:15:28 crc kubenswrapper[4794]: I0216 17:15:28.481810 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-v7cfc" podStartSLOduration=2.278032864 podStartE2EDuration="6.481791913s" podCreationTimestamp="2026-02-16 17:15:22 +0000 UTC" firstStartedPulling="2026-02-16 17:15:23.436061681 +0000 UTC m=+949.384156328" lastFinishedPulling="2026-02-16 17:15:27.63982074 +0000 UTC m=+953.587915377" observedRunningTime="2026-02-16 17:15:28.47640342 +0000 UTC m=+954.424498067" watchObservedRunningTime="2026-02-16 17:15:28.481791913 +0000 UTC m=+954.429886560"
Feb 16 17:15:28 crc kubenswrapper[4794]: I0216 17:15:28.500336 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-gcn4p" podStartSLOduration=3.563235886 podStartE2EDuration="6.500292567s" podCreationTimestamp="2026-02-16 17:15:22 +0000 UTC" firstStartedPulling="2026-02-16 17:15:24.505707846 +0000 UTC m=+950.453802493" lastFinishedPulling="2026-02-16 17:15:27.442764527 +0000 UTC m=+953.390859174" observedRunningTime="2026-02-16 17:15:28.490766187 +0000 UTC m=+954.438860834" watchObservedRunningTime="2026-02-16 17:15:28.500292567 +0000 UTC m=+954.448387214"
Feb 16 17:15:28 crc kubenswrapper[4794]: I0216 17:15:28.510113 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-n99t7" podStartSLOduration=2.796456092 podStartE2EDuration="6.510094505s" podCreationTimestamp="2026-02-16 17:15:22 +0000 UTC" firstStartedPulling="2026-02-16 17:15:23.730083361 +0000 UTC m=+949.678178008" lastFinishedPulling="2026-02-16 17:15:27.443721774 +0000 UTC m=+953.391816421" observedRunningTime="2026-02-16 17:15:28.507176292 +0000 UTC m=+954.455270939" watchObservedRunningTime="2026-02-16 17:15:28.510094505 +0000 UTC m=+954.458189152"
Feb 16 17:15:28 crc kubenswrapper[4794]: I0216 17:15:28.537578 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jd2x8" podStartSLOduration=2.3061152910000002 podStartE2EDuration="5.537557783s" podCreationTimestamp="2026-02-16 17:15:23 +0000 UTC" firstStartedPulling="2026-02-16 17:15:24.409758097 +0000 UTC m=+950.357852744" lastFinishedPulling="2026-02-16 17:15:27.641200589 +0000 UTC m=+953.589295236" observedRunningTime="2026-02-16 17:15:28.532913561 +0000 UTC m=+954.481008228" watchObservedRunningTime="2026-02-16 17:15:28.537557783 +0000 UTC m=+954.485652430"
Feb 16 17:15:30 crc kubenswrapper[4794]: I0216 17:15:30.481662 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-9wmw5" event={"ID":"fb8ce142-2a32-4900-9a3b-7534607c176c","Type":"ContainerStarted","Data":"87e80e7b0f2518864b566316f850e4d044ce44a56bb54b98f001bb30dbf74cef"}
Feb 16 17:15:30 crc kubenswrapper[4794]: I0216 17:15:30.496166 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-9wmw5" podStartSLOduration=2.445706995 podStartE2EDuration="8.496145433s" podCreationTimestamp="2026-02-16 17:15:22 +0000 UTC" firstStartedPulling="2026-02-16 17:15:23.810294273 +0000 UTC m=+949.758388920" lastFinishedPulling="2026-02-16 17:15:29.860732721 +0000 UTC m=+955.808827358" observedRunningTime="2026-02-16 17:15:30.495427193 +0000 UTC m=+956.443521840" watchObservedRunningTime="2026-02-16 17:15:30.496145433 +0000 UTC m=+956.444240090"
Feb 16 17:15:33 crc kubenswrapper[4794]: I0216 17:15:33.435202 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-v7cfc"
Feb 16 17:15:33 crc kubenswrapper[4794]: I0216 17:15:33.578290 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:33 crc kubenswrapper[4794]: I0216 17:15:33.578431 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:33 crc kubenswrapper[4794]: I0216 17:15:33.586589 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:33 crc kubenswrapper[4794]: I0216 17:15:33.654538 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jd2x8"
Feb 16 17:15:33 crc kubenswrapper[4794]: I0216 17:15:33.654606 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jd2x8"
Feb 16 17:15:33 crc kubenswrapper[4794]: I0216 17:15:33.698911 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jd2x8"
Feb 16 17:15:34 crc kubenswrapper[4794]: I0216 17:15:34.520862 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-576f6bf7c-mkh5d"
Feb 16 17:15:34 crc kubenswrapper[4794]: I0216 17:15:34.575112 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jd2x8"
Feb 16 17:15:34 crc kubenswrapper[4794]: I0216 17:15:34.580440 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-d58d8d689-ppcq9"]
Feb 16 17:15:34 crc kubenswrapper[4794]: I0216 17:15:34.638816 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jd2x8"]
Feb 16 17:15:36 crc kubenswrapper[4794]: I0216 17:15:36.528148 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jd2x8" podUID="58a927f6-aaee-4e8f-a4cf-09d067ec88ed" containerName="registry-server" containerID="cri-o://6969a8ee336a933fa97574fe5b3dc5009eca04b6665798d7f20e1ca65462a931" gracePeriod=2
Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.480802 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jd2x8"
Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.536248 4794 generic.go:334] "Generic (PLEG): container finished" podID="58a927f6-aaee-4e8f-a4cf-09d067ec88ed" containerID="6969a8ee336a933fa97574fe5b3dc5009eca04b6665798d7f20e1ca65462a931" exitCode=0
Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.536296 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jd2x8" event={"ID":"58a927f6-aaee-4e8f-a4cf-09d067ec88ed","Type":"ContainerDied","Data":"6969a8ee336a933fa97574fe5b3dc5009eca04b6665798d7f20e1ca65462a931"}
Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.536325 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jd2x8"
Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.536349 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jd2x8" event={"ID":"58a927f6-aaee-4e8f-a4cf-09d067ec88ed","Type":"ContainerDied","Data":"0e72d9eaf79976edcdd26611188e5a4c7e6afbb8b5321cc4fefcdfb162d51088"}
Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.536372 4794 scope.go:117] "RemoveContainer" containerID="6969a8ee336a933fa97574fe5b3dc5009eca04b6665798d7f20e1ca65462a931"
Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.547957 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58a927f6-aaee-4e8f-a4cf-09d067ec88ed-utilities\") pod \"58a927f6-aaee-4e8f-a4cf-09d067ec88ed\" (UID: \"58a927f6-aaee-4e8f-a4cf-09d067ec88ed\") "
Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.548055 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p98mr\" (UniqueName: \"kubernetes.io/projected/58a927f6-aaee-4e8f-a4cf-09d067ec88ed-kube-api-access-p98mr\") pod \"58a927f6-aaee-4e8f-a4cf-09d067ec88ed\" (UID: \"58a927f6-aaee-4e8f-a4cf-09d067ec88ed\") "
Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.548109 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58a927f6-aaee-4e8f-a4cf-09d067ec88ed-catalog-content\") pod \"58a927f6-aaee-4e8f-a4cf-09d067ec88ed\" (UID: \"58a927f6-aaee-4e8f-a4cf-09d067ec88ed\") "
Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.549150 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58a927f6-aaee-4e8f-a4cf-09d067ec88ed-utilities" (OuterVolumeSpecName: "utilities") pod "58a927f6-aaee-4e8f-a4cf-09d067ec88ed" (UID: "58a927f6-aaee-4e8f-a4cf-09d067ec88ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.554176 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58a927f6-aaee-4e8f-a4cf-09d067ec88ed-kube-api-access-p98mr" (OuterVolumeSpecName: "kube-api-access-p98mr") pod "58a927f6-aaee-4e8f-a4cf-09d067ec88ed" (UID: "58a927f6-aaee-4e8f-a4cf-09d067ec88ed"). InnerVolumeSpecName "kube-api-access-p98mr".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.554210 4794 scope.go:117] "RemoveContainer" containerID="644f42457c46b49edb4a1734caf4fff748988c19829a8116854a1eab81ab1a9b" Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.591240 4794 scope.go:117] "RemoveContainer" containerID="01d114b9981287f412c665837450d16e90c33ff89532fa61a5c2d6324ad76b13" Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.604120 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58a927f6-aaee-4e8f-a4cf-09d067ec88ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58a927f6-aaee-4e8f-a4cf-09d067ec88ed" (UID: "58a927f6-aaee-4e8f-a4cf-09d067ec88ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.618839 4794 scope.go:117] "RemoveContainer" containerID="6969a8ee336a933fa97574fe5b3dc5009eca04b6665798d7f20e1ca65462a931" Feb 16 17:15:37 crc kubenswrapper[4794]: E0216 17:15:37.619644 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6969a8ee336a933fa97574fe5b3dc5009eca04b6665798d7f20e1ca65462a931\": container with ID starting with 6969a8ee336a933fa97574fe5b3dc5009eca04b6665798d7f20e1ca65462a931 not found: ID does not exist" containerID="6969a8ee336a933fa97574fe5b3dc5009eca04b6665798d7f20e1ca65462a931" Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.619679 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6969a8ee336a933fa97574fe5b3dc5009eca04b6665798d7f20e1ca65462a931"} err="failed to get container status \"6969a8ee336a933fa97574fe5b3dc5009eca04b6665798d7f20e1ca65462a931\": rpc error: code = NotFound desc = could not find container \"6969a8ee336a933fa97574fe5b3dc5009eca04b6665798d7f20e1ca65462a931\": container with ID starting 
with 6969a8ee336a933fa97574fe5b3dc5009eca04b6665798d7f20e1ca65462a931 not found: ID does not exist" Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.619705 4794 scope.go:117] "RemoveContainer" containerID="644f42457c46b49edb4a1734caf4fff748988c19829a8116854a1eab81ab1a9b" Feb 16 17:15:37 crc kubenswrapper[4794]: E0216 17:15:37.620194 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"644f42457c46b49edb4a1734caf4fff748988c19829a8116854a1eab81ab1a9b\": container with ID starting with 644f42457c46b49edb4a1734caf4fff748988c19829a8116854a1eab81ab1a9b not found: ID does not exist" containerID="644f42457c46b49edb4a1734caf4fff748988c19829a8116854a1eab81ab1a9b" Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.620224 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"644f42457c46b49edb4a1734caf4fff748988c19829a8116854a1eab81ab1a9b"} err="failed to get container status \"644f42457c46b49edb4a1734caf4fff748988c19829a8116854a1eab81ab1a9b\": rpc error: code = NotFound desc = could not find container \"644f42457c46b49edb4a1734caf4fff748988c19829a8116854a1eab81ab1a9b\": container with ID starting with 644f42457c46b49edb4a1734caf4fff748988c19829a8116854a1eab81ab1a9b not found: ID does not exist" Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.620242 4794 scope.go:117] "RemoveContainer" containerID="01d114b9981287f412c665837450d16e90c33ff89532fa61a5c2d6324ad76b13" Feb 16 17:15:37 crc kubenswrapper[4794]: E0216 17:15:37.621293 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01d114b9981287f412c665837450d16e90c33ff89532fa61a5c2d6324ad76b13\": container with ID starting with 01d114b9981287f412c665837450d16e90c33ff89532fa61a5c2d6324ad76b13 not found: ID does not exist" containerID="01d114b9981287f412c665837450d16e90c33ff89532fa61a5c2d6324ad76b13" Feb 16 17:15:37 
crc kubenswrapper[4794]: I0216 17:15:37.621366 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d114b9981287f412c665837450d16e90c33ff89532fa61a5c2d6324ad76b13"} err="failed to get container status \"01d114b9981287f412c665837450d16e90c33ff89532fa61a5c2d6324ad76b13\": rpc error: code = NotFound desc = could not find container \"01d114b9981287f412c665837450d16e90c33ff89532fa61a5c2d6324ad76b13\": container with ID starting with 01d114b9981287f412c665837450d16e90c33ff89532fa61a5c2d6324ad76b13 not found: ID does not exist" Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.649890 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58a927f6-aaee-4e8f-a4cf-09d067ec88ed-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.650193 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p98mr\" (UniqueName: \"kubernetes.io/projected/58a927f6-aaee-4e8f-a4cf-09d067ec88ed-kube-api-access-p98mr\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.650209 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58a927f6-aaee-4e8f-a4cf-09d067ec88ed-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.875776 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jd2x8"] Feb 16 17:15:37 crc kubenswrapper[4794]: I0216 17:15:37.881809 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jd2x8"] Feb 16 17:15:38 crc kubenswrapper[4794]: I0216 17:15:38.802966 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58a927f6-aaee-4e8f-a4cf-09d067ec88ed" path="/var/lib/kubelet/pods/58a927f6-aaee-4e8f-a4cf-09d067ec88ed/volumes" Feb 16 17:15:43 
crc kubenswrapper[4794]: I0216 17:15:43.093693 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-n99t7" Feb 16 17:15:59 crc kubenswrapper[4794]: I0216 17:15:59.639941 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-d58d8d689-ppcq9" podUID="9bcab709-93c7-484e-b7f3-1bcdb808dd45" containerName="console" containerID="cri-o://b5f7e272d5ea88fb09c13744eb51c1a753aa4959926dda265e2610f23892805d" gracePeriod=15 Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.110112 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-d58d8d689-ppcq9_9bcab709-93c7-484e-b7f3-1bcdb808dd45/console/0.log" Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.110461 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.259528 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57dbd\" (UniqueName: \"kubernetes.io/projected/9bcab709-93c7-484e-b7f3-1bcdb808dd45-kube-api-access-57dbd\") pod \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.259624 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-service-ca\") pod \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.259694 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-trusted-ca-bundle\") pod \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\" (UID: 
\"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.259727 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-console-config\") pod \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.259759 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-oauth-serving-cert\") pod \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.259828 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9bcab709-93c7-484e-b7f3-1bcdb808dd45-console-oauth-config\") pod \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.259871 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bcab709-93c7-484e-b7f3-1bcdb808dd45-console-serving-cert\") pod \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\" (UID: \"9bcab709-93c7-484e-b7f3-1bcdb808dd45\") " Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.260676 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9bcab709-93c7-484e-b7f3-1bcdb808dd45" (UID: "9bcab709-93c7-484e-b7f3-1bcdb808dd45"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.260700 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-service-ca" (OuterVolumeSpecName: "service-ca") pod "9bcab709-93c7-484e-b7f3-1bcdb808dd45" (UID: "9bcab709-93c7-484e-b7f3-1bcdb808dd45"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.260700 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9bcab709-93c7-484e-b7f3-1bcdb808dd45" (UID: "9bcab709-93c7-484e-b7f3-1bcdb808dd45"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.260865 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-console-config" (OuterVolumeSpecName: "console-config") pod "9bcab709-93c7-484e-b7f3-1bcdb808dd45" (UID: "9bcab709-93c7-484e-b7f3-1bcdb808dd45"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.260972 4794 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.260993 4794 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.261002 4794 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.261009 4794 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9bcab709-93c7-484e-b7f3-1bcdb808dd45-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.267487 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bcab709-93c7-484e-b7f3-1bcdb808dd45-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9bcab709-93c7-484e-b7f3-1bcdb808dd45" (UID: "9bcab709-93c7-484e-b7f3-1bcdb808dd45"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.267563 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bcab709-93c7-484e-b7f3-1bcdb808dd45-kube-api-access-57dbd" (OuterVolumeSpecName: "kube-api-access-57dbd") pod "9bcab709-93c7-484e-b7f3-1bcdb808dd45" (UID: "9bcab709-93c7-484e-b7f3-1bcdb808dd45"). InnerVolumeSpecName "kube-api-access-57dbd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.275118 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bcab709-93c7-484e-b7f3-1bcdb808dd45-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9bcab709-93c7-484e-b7f3-1bcdb808dd45" (UID: "9bcab709-93c7-484e-b7f3-1bcdb808dd45"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.362091 4794 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9bcab709-93c7-484e-b7f3-1bcdb808dd45-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.362149 4794 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bcab709-93c7-484e-b7f3-1bcdb808dd45-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.362162 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57dbd\" (UniqueName: \"kubernetes.io/projected/9bcab709-93c7-484e-b7f3-1bcdb808dd45-kube-api-access-57dbd\") on node \"crc\" DevicePath \"\"" Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.728150 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-d58d8d689-ppcq9_9bcab709-93c7-484e-b7f3-1bcdb808dd45/console/0.log" Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.728212 4794 generic.go:334] "Generic (PLEG): container finished" podID="9bcab709-93c7-484e-b7f3-1bcdb808dd45" containerID="b5f7e272d5ea88fb09c13744eb51c1a753aa4959926dda265e2610f23892805d" exitCode=2 Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.728245 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-d58d8d689-ppcq9" 
event={"ID":"9bcab709-93c7-484e-b7f3-1bcdb808dd45","Type":"ContainerDied","Data":"b5f7e272d5ea88fb09c13744eb51c1a753aa4959926dda265e2610f23892805d"} Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.728280 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-d58d8d689-ppcq9" event={"ID":"9bcab709-93c7-484e-b7f3-1bcdb808dd45","Type":"ContainerDied","Data":"1e3b88aab234a8c0300fd0ad599fcad3be58fa95d6c909245bb22cba204b1b25"} Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.728324 4794 scope.go:117] "RemoveContainer" containerID="b5f7e272d5ea88fb09c13744eb51c1a753aa4959926dda265e2610f23892805d" Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.728339 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-d58d8d689-ppcq9" Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.751555 4794 scope.go:117] "RemoveContainer" containerID="b5f7e272d5ea88fb09c13744eb51c1a753aa4959926dda265e2610f23892805d" Feb 16 17:16:00 crc kubenswrapper[4794]: E0216 17:16:00.752642 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5f7e272d5ea88fb09c13744eb51c1a753aa4959926dda265e2610f23892805d\": container with ID starting with b5f7e272d5ea88fb09c13744eb51c1a753aa4959926dda265e2610f23892805d not found: ID does not exist" containerID="b5f7e272d5ea88fb09c13744eb51c1a753aa4959926dda265e2610f23892805d" Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.752685 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5f7e272d5ea88fb09c13744eb51c1a753aa4959926dda265e2610f23892805d"} err="failed to get container status \"b5f7e272d5ea88fb09c13744eb51c1a753aa4959926dda265e2610f23892805d\": rpc error: code = NotFound desc = could not find container \"b5f7e272d5ea88fb09c13744eb51c1a753aa4959926dda265e2610f23892805d\": container with ID starting with 
b5f7e272d5ea88fb09c13744eb51c1a753aa4959926dda265e2610f23892805d not found: ID does not exist" Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.760499 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-d58d8d689-ppcq9"] Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.767146 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-d58d8d689-ppcq9"] Feb 16 17:16:00 crc kubenswrapper[4794]: I0216 17:16:00.800500 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bcab709-93c7-484e-b7f3-1bcdb808dd45" path="/var/lib/kubelet/pods/9bcab709-93c7-484e-b7f3-1bcdb808dd45/volumes" Feb 16 17:16:01 crc kubenswrapper[4794]: I0216 17:16:01.907111 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t"] Feb 16 17:16:01 crc kubenswrapper[4794]: E0216 17:16:01.907473 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58a927f6-aaee-4e8f-a4cf-09d067ec88ed" containerName="registry-server" Feb 16 17:16:01 crc kubenswrapper[4794]: I0216 17:16:01.907491 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="58a927f6-aaee-4e8f-a4cf-09d067ec88ed" containerName="registry-server" Feb 16 17:16:01 crc kubenswrapper[4794]: E0216 17:16:01.907502 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58a927f6-aaee-4e8f-a4cf-09d067ec88ed" containerName="extract-content" Feb 16 17:16:01 crc kubenswrapper[4794]: I0216 17:16:01.907510 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="58a927f6-aaee-4e8f-a4cf-09d067ec88ed" containerName="extract-content" Feb 16 17:16:01 crc kubenswrapper[4794]: E0216 17:16:01.907526 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bcab709-93c7-484e-b7f3-1bcdb808dd45" containerName="console" Feb 16 17:16:01 crc kubenswrapper[4794]: I0216 17:16:01.907531 4794 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9bcab709-93c7-484e-b7f3-1bcdb808dd45" containerName="console" Feb 16 17:16:01 crc kubenswrapper[4794]: E0216 17:16:01.907548 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58a927f6-aaee-4e8f-a4cf-09d067ec88ed" containerName="extract-utilities" Feb 16 17:16:01 crc kubenswrapper[4794]: I0216 17:16:01.907554 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="58a927f6-aaee-4e8f-a4cf-09d067ec88ed" containerName="extract-utilities" Feb 16 17:16:01 crc kubenswrapper[4794]: I0216 17:16:01.907679 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="58a927f6-aaee-4e8f-a4cf-09d067ec88ed" containerName="registry-server" Feb 16 17:16:01 crc kubenswrapper[4794]: I0216 17:16:01.907689 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bcab709-93c7-484e-b7f3-1bcdb808dd45" containerName="console" Feb 16 17:16:01 crc kubenswrapper[4794]: I0216 17:16:01.908650 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t" Feb 16 17:16:01 crc kubenswrapper[4794]: I0216 17:16:01.912427 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 17:16:01 crc kubenswrapper[4794]: I0216 17:16:01.924945 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t"] Feb 16 17:16:02 crc kubenswrapper[4794]: I0216 17:16:02.084260 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bd5ce2a-a814-4cae-bd5e-21ef1564d186-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t\" (UID: \"1bd5ce2a-a814-4cae-bd5e-21ef1564d186\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t" Feb 16 17:16:02 crc 
kubenswrapper[4794]: I0216 17:16:02.084360 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bd5ce2a-a814-4cae-bd5e-21ef1564d186-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t\" (UID: \"1bd5ce2a-a814-4cae-bd5e-21ef1564d186\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t" Feb 16 17:16:02 crc kubenswrapper[4794]: I0216 17:16:02.084418 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvtg4\" (UniqueName: \"kubernetes.io/projected/1bd5ce2a-a814-4cae-bd5e-21ef1564d186-kube-api-access-bvtg4\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t\" (UID: \"1bd5ce2a-a814-4cae-bd5e-21ef1564d186\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t" Feb 16 17:16:02 crc kubenswrapper[4794]: I0216 17:16:02.186278 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bd5ce2a-a814-4cae-bd5e-21ef1564d186-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t\" (UID: \"1bd5ce2a-a814-4cae-bd5e-21ef1564d186\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t" Feb 16 17:16:02 crc kubenswrapper[4794]: I0216 17:16:02.186429 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bd5ce2a-a814-4cae-bd5e-21ef1564d186-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t\" (UID: \"1bd5ce2a-a814-4cae-bd5e-21ef1564d186\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t" Feb 16 17:16:02 crc kubenswrapper[4794]: I0216 17:16:02.186534 4794 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-bvtg4\" (UniqueName: \"kubernetes.io/projected/1bd5ce2a-a814-4cae-bd5e-21ef1564d186-kube-api-access-bvtg4\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t\" (UID: \"1bd5ce2a-a814-4cae-bd5e-21ef1564d186\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t" Feb 16 17:16:02 crc kubenswrapper[4794]: I0216 17:16:02.187349 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bd5ce2a-a814-4cae-bd5e-21ef1564d186-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t\" (UID: \"1bd5ce2a-a814-4cae-bd5e-21ef1564d186\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t" Feb 16 17:16:02 crc kubenswrapper[4794]: I0216 17:16:02.187363 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bd5ce2a-a814-4cae-bd5e-21ef1564d186-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t\" (UID: \"1bd5ce2a-a814-4cae-bd5e-21ef1564d186\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t" Feb 16 17:16:02 crc kubenswrapper[4794]: I0216 17:16:02.207745 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvtg4\" (UniqueName: \"kubernetes.io/projected/1bd5ce2a-a814-4cae-bd5e-21ef1564d186-kube-api-access-bvtg4\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t\" (UID: \"1bd5ce2a-a814-4cae-bd5e-21ef1564d186\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t" Feb 16 17:16:02 crc kubenswrapper[4794]: I0216 17:16:02.229296 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t" Feb 16 17:16:02 crc kubenswrapper[4794]: I0216 17:16:02.683807 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t"] Feb 16 17:16:02 crc kubenswrapper[4794]: I0216 17:16:02.744634 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t" event={"ID":"1bd5ce2a-a814-4cae-bd5e-21ef1564d186","Type":"ContainerStarted","Data":"c2ed0e65973efd06d13e5df836856663ccbb6688bf3bde06e384a7cb5e9f2d67"} Feb 16 17:16:03 crc kubenswrapper[4794]: I0216 17:16:03.755236 4794 generic.go:334] "Generic (PLEG): container finished" podID="1bd5ce2a-a814-4cae-bd5e-21ef1564d186" containerID="01f9d86ac9ec7e374f0a5df43a8d59e406a13189e26936f40c0030899b28ef83" exitCode=0 Feb 16 17:16:03 crc kubenswrapper[4794]: I0216 17:16:03.755322 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t" event={"ID":"1bd5ce2a-a814-4cae-bd5e-21ef1564d186","Type":"ContainerDied","Data":"01f9d86ac9ec7e374f0a5df43a8d59e406a13189e26936f40c0030899b28ef83"} Feb 16 17:16:03 crc kubenswrapper[4794]: I0216 17:16:03.757251 4794 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:16:05 crc kubenswrapper[4794]: I0216 17:16:05.770365 4794 generic.go:334] "Generic (PLEG): container finished" podID="1bd5ce2a-a814-4cae-bd5e-21ef1564d186" containerID="a1200f615886ba85c5e5a7c8c8c9a03526dd905a70bab8e4873360227f223254" exitCode=0 Feb 16 17:16:05 crc kubenswrapper[4794]: I0216 17:16:05.770408 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t" 
event={"ID":"1bd5ce2a-a814-4cae-bd5e-21ef1564d186","Type":"ContainerDied","Data":"a1200f615886ba85c5e5a7c8c8c9a03526dd905a70bab8e4873360227f223254"} Feb 16 17:16:06 crc kubenswrapper[4794]: I0216 17:16:06.778754 4794 generic.go:334] "Generic (PLEG): container finished" podID="1bd5ce2a-a814-4cae-bd5e-21ef1564d186" containerID="d5d2c27707f01f2bc94eb24f7cb66b096600ab2128cee2352a68606b3d69ea56" exitCode=0 Feb 16 17:16:06 crc kubenswrapper[4794]: I0216 17:16:06.778812 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t" event={"ID":"1bd5ce2a-a814-4cae-bd5e-21ef1564d186","Type":"ContainerDied","Data":"d5d2c27707f01f2bc94eb24f7cb66b096600ab2128cee2352a68606b3d69ea56"} Feb 16 17:16:08 crc kubenswrapper[4794]: I0216 17:16:08.097179 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t" Feb 16 17:16:08 crc kubenswrapper[4794]: I0216 17:16:08.278269 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bd5ce2a-a814-4cae-bd5e-21ef1564d186-util\") pod \"1bd5ce2a-a814-4cae-bd5e-21ef1564d186\" (UID: \"1bd5ce2a-a814-4cae-bd5e-21ef1564d186\") " Feb 16 17:16:08 crc kubenswrapper[4794]: I0216 17:16:08.278593 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bd5ce2a-a814-4cae-bd5e-21ef1564d186-bundle\") pod \"1bd5ce2a-a814-4cae-bd5e-21ef1564d186\" (UID: \"1bd5ce2a-a814-4cae-bd5e-21ef1564d186\") " Feb 16 17:16:08 crc kubenswrapper[4794]: I0216 17:16:08.278695 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvtg4\" (UniqueName: \"kubernetes.io/projected/1bd5ce2a-a814-4cae-bd5e-21ef1564d186-kube-api-access-bvtg4\") pod \"1bd5ce2a-a814-4cae-bd5e-21ef1564d186\" (UID: 
\"1bd5ce2a-a814-4cae-bd5e-21ef1564d186\") " Feb 16 17:16:08 crc kubenswrapper[4794]: I0216 17:16:08.279957 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bd5ce2a-a814-4cae-bd5e-21ef1564d186-bundle" (OuterVolumeSpecName: "bundle") pod "1bd5ce2a-a814-4cae-bd5e-21ef1564d186" (UID: "1bd5ce2a-a814-4cae-bd5e-21ef1564d186"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:16:08 crc kubenswrapper[4794]: I0216 17:16:08.290400 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bd5ce2a-a814-4cae-bd5e-21ef1564d186-kube-api-access-bvtg4" (OuterVolumeSpecName: "kube-api-access-bvtg4") pod "1bd5ce2a-a814-4cae-bd5e-21ef1564d186" (UID: "1bd5ce2a-a814-4cae-bd5e-21ef1564d186"). InnerVolumeSpecName "kube-api-access-bvtg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:16:08 crc kubenswrapper[4794]: I0216 17:16:08.380641 4794 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bd5ce2a-a814-4cae-bd5e-21ef1564d186-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:16:08 crc kubenswrapper[4794]: I0216 17:16:08.380689 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvtg4\" (UniqueName: \"kubernetes.io/projected/1bd5ce2a-a814-4cae-bd5e-21ef1564d186-kube-api-access-bvtg4\") on node \"crc\" DevicePath \"\"" Feb 16 17:16:08 crc kubenswrapper[4794]: I0216 17:16:08.486884 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bd5ce2a-a814-4cae-bd5e-21ef1564d186-util" (OuterVolumeSpecName: "util") pod "1bd5ce2a-a814-4cae-bd5e-21ef1564d186" (UID: "1bd5ce2a-a814-4cae-bd5e-21ef1564d186"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:16:08 crc kubenswrapper[4794]: I0216 17:16:08.584685 4794 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bd5ce2a-a814-4cae-bd5e-21ef1564d186-util\") on node \"crc\" DevicePath \"\"" Feb 16 17:16:08 crc kubenswrapper[4794]: I0216 17:16:08.799479 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t" Feb 16 17:16:08 crc kubenswrapper[4794]: I0216 17:16:08.803802 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t" event={"ID":"1bd5ce2a-a814-4cae-bd5e-21ef1564d186","Type":"ContainerDied","Data":"c2ed0e65973efd06d13e5df836856663ccbb6688bf3bde06e384a7cb5e9f2d67"} Feb 16 17:16:08 crc kubenswrapper[4794]: I0216 17:16:08.803898 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2ed0e65973efd06d13e5df836856663ccbb6688bf3bde06e384a7cb5e9f2d67" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.635791 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7cfd877d99-ln65b"] Feb 16 17:16:20 crc kubenswrapper[4794]: E0216 17:16:20.636713 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd5ce2a-a814-4cae-bd5e-21ef1564d186" containerName="extract" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.636730 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd5ce2a-a814-4cae-bd5e-21ef1564d186" containerName="extract" Feb 16 17:16:20 crc kubenswrapper[4794]: E0216 17:16:20.636755 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd5ce2a-a814-4cae-bd5e-21ef1564d186" containerName="util" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.636762 4794 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1bd5ce2a-a814-4cae-bd5e-21ef1564d186" containerName="util" Feb 16 17:16:20 crc kubenswrapper[4794]: E0216 17:16:20.636772 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd5ce2a-a814-4cae-bd5e-21ef1564d186" containerName="pull" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.636783 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd5ce2a-a814-4cae-bd5e-21ef1564d186" containerName="pull" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.636944 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bd5ce2a-a814-4cae-bd5e-21ef1564d186" containerName="extract" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.637599 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7cfd877d99-ln65b" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.640557 4794 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-smgk4" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.640556 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.641461 4794 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.641505 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.641558 4794 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.656245 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7cfd877d99-ln65b"] Feb 16 17:16:20 crc kubenswrapper[4794]: 
I0216 17:16:20.768246 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/336e2f2e-feed-48c4-8ef5-26630fbf649b-apiservice-cert\") pod \"metallb-operator-controller-manager-7cfd877d99-ln65b\" (UID: \"336e2f2e-feed-48c4-8ef5-26630fbf649b\") " pod="metallb-system/metallb-operator-controller-manager-7cfd877d99-ln65b" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.768382 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/336e2f2e-feed-48c4-8ef5-26630fbf649b-webhook-cert\") pod \"metallb-operator-controller-manager-7cfd877d99-ln65b\" (UID: \"336e2f2e-feed-48c4-8ef5-26630fbf649b\") " pod="metallb-system/metallb-operator-controller-manager-7cfd877d99-ln65b" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.768415 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgfch\" (UniqueName: \"kubernetes.io/projected/336e2f2e-feed-48c4-8ef5-26630fbf649b-kube-api-access-bgfch\") pod \"metallb-operator-controller-manager-7cfd877d99-ln65b\" (UID: \"336e2f2e-feed-48c4-8ef5-26630fbf649b\") " pod="metallb-system/metallb-operator-controller-manager-7cfd877d99-ln65b" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.870185 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/336e2f2e-feed-48c4-8ef5-26630fbf649b-apiservice-cert\") pod \"metallb-operator-controller-manager-7cfd877d99-ln65b\" (UID: \"336e2f2e-feed-48c4-8ef5-26630fbf649b\") " pod="metallb-system/metallb-operator-controller-manager-7cfd877d99-ln65b" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.870271 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/336e2f2e-feed-48c4-8ef5-26630fbf649b-webhook-cert\") pod \"metallb-operator-controller-manager-7cfd877d99-ln65b\" (UID: \"336e2f2e-feed-48c4-8ef5-26630fbf649b\") " pod="metallb-system/metallb-operator-controller-manager-7cfd877d99-ln65b" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.871733 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgfch\" (UniqueName: \"kubernetes.io/projected/336e2f2e-feed-48c4-8ef5-26630fbf649b-kube-api-access-bgfch\") pod \"metallb-operator-controller-manager-7cfd877d99-ln65b\" (UID: \"336e2f2e-feed-48c4-8ef5-26630fbf649b\") " pod="metallb-system/metallb-operator-controller-manager-7cfd877d99-ln65b" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.876415 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/336e2f2e-feed-48c4-8ef5-26630fbf649b-apiservice-cert\") pod \"metallb-operator-controller-manager-7cfd877d99-ln65b\" (UID: \"336e2f2e-feed-48c4-8ef5-26630fbf649b\") " pod="metallb-system/metallb-operator-controller-manager-7cfd877d99-ln65b" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.876422 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/336e2f2e-feed-48c4-8ef5-26630fbf649b-webhook-cert\") pod \"metallb-operator-controller-manager-7cfd877d99-ln65b\" (UID: \"336e2f2e-feed-48c4-8ef5-26630fbf649b\") " pod="metallb-system/metallb-operator-controller-manager-7cfd877d99-ln65b" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.887858 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgfch\" (UniqueName: \"kubernetes.io/projected/336e2f2e-feed-48c4-8ef5-26630fbf649b-kube-api-access-bgfch\") pod \"metallb-operator-controller-manager-7cfd877d99-ln65b\" (UID: \"336e2f2e-feed-48c4-8ef5-26630fbf649b\") " 
pod="metallb-system/metallb-operator-controller-manager-7cfd877d99-ln65b" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.959970 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7cfd877d99-ln65b" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.974230 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c9857685-shg96"] Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.975381 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6c9857685-shg96" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.979236 4794 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.979295 4794 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 16 17:16:20 crc kubenswrapper[4794]: I0216 17:16:20.979236 4794 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-9l5zn" Feb 16 17:16:21 crc kubenswrapper[4794]: I0216 17:16:21.008535 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c9857685-shg96"] Feb 16 17:16:21 crc kubenswrapper[4794]: I0216 17:16:21.181372 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jmkm\" (UniqueName: \"kubernetes.io/projected/e9e1f0f5-927b-4cc7-94c3-130c0a320750-kube-api-access-4jmkm\") pod \"metallb-operator-webhook-server-6c9857685-shg96\" (UID: \"e9e1f0f5-927b-4cc7-94c3-130c0a320750\") " pod="metallb-system/metallb-operator-webhook-server-6c9857685-shg96" Feb 16 17:16:21 crc kubenswrapper[4794]: I0216 17:16:21.181787 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e9e1f0f5-927b-4cc7-94c3-130c0a320750-apiservice-cert\") pod \"metallb-operator-webhook-server-6c9857685-shg96\" (UID: \"e9e1f0f5-927b-4cc7-94c3-130c0a320750\") " pod="metallb-system/metallb-operator-webhook-server-6c9857685-shg96" Feb 16 17:16:21 crc kubenswrapper[4794]: I0216 17:16:21.181839 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e9e1f0f5-927b-4cc7-94c3-130c0a320750-webhook-cert\") pod \"metallb-operator-webhook-server-6c9857685-shg96\" (UID: \"e9e1f0f5-927b-4cc7-94c3-130c0a320750\") " pod="metallb-system/metallb-operator-webhook-server-6c9857685-shg96" Feb 16 17:16:21 crc kubenswrapper[4794]: I0216 17:16:21.284127 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e9e1f0f5-927b-4cc7-94c3-130c0a320750-apiservice-cert\") pod \"metallb-operator-webhook-server-6c9857685-shg96\" (UID: \"e9e1f0f5-927b-4cc7-94c3-130c0a320750\") " pod="metallb-system/metallb-operator-webhook-server-6c9857685-shg96" Feb 16 17:16:21 crc kubenswrapper[4794]: I0216 17:16:21.284230 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e9e1f0f5-927b-4cc7-94c3-130c0a320750-webhook-cert\") pod \"metallb-operator-webhook-server-6c9857685-shg96\" (UID: \"e9e1f0f5-927b-4cc7-94c3-130c0a320750\") " pod="metallb-system/metallb-operator-webhook-server-6c9857685-shg96" Feb 16 17:16:21 crc kubenswrapper[4794]: I0216 17:16:21.284346 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jmkm\" (UniqueName: \"kubernetes.io/projected/e9e1f0f5-927b-4cc7-94c3-130c0a320750-kube-api-access-4jmkm\") pod \"metallb-operator-webhook-server-6c9857685-shg96\" (UID: 
\"e9e1f0f5-927b-4cc7-94c3-130c0a320750\") " pod="metallb-system/metallb-operator-webhook-server-6c9857685-shg96" Feb 16 17:16:21 crc kubenswrapper[4794]: I0216 17:16:21.299628 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e9e1f0f5-927b-4cc7-94c3-130c0a320750-apiservice-cert\") pod \"metallb-operator-webhook-server-6c9857685-shg96\" (UID: \"e9e1f0f5-927b-4cc7-94c3-130c0a320750\") " pod="metallb-system/metallb-operator-webhook-server-6c9857685-shg96" Feb 16 17:16:21 crc kubenswrapper[4794]: I0216 17:16:21.324690 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e9e1f0f5-927b-4cc7-94c3-130c0a320750-webhook-cert\") pod \"metallb-operator-webhook-server-6c9857685-shg96\" (UID: \"e9e1f0f5-927b-4cc7-94c3-130c0a320750\") " pod="metallb-system/metallb-operator-webhook-server-6c9857685-shg96" Feb 16 17:16:21 crc kubenswrapper[4794]: I0216 17:16:21.329061 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jmkm\" (UniqueName: \"kubernetes.io/projected/e9e1f0f5-927b-4cc7-94c3-130c0a320750-kube-api-access-4jmkm\") pod \"metallb-operator-webhook-server-6c9857685-shg96\" (UID: \"e9e1f0f5-927b-4cc7-94c3-130c0a320750\") " pod="metallb-system/metallb-operator-webhook-server-6c9857685-shg96" Feb 16 17:16:21 crc kubenswrapper[4794]: I0216 17:16:21.363894 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6c9857685-shg96" Feb 16 17:16:21 crc kubenswrapper[4794]: I0216 17:16:21.556935 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7cfd877d99-ln65b"] Feb 16 17:16:21 crc kubenswrapper[4794]: I0216 17:16:21.851162 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c9857685-shg96"] Feb 16 17:16:21 crc kubenswrapper[4794]: W0216 17:16:21.857403 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9e1f0f5_927b_4cc7_94c3_130c0a320750.slice/crio-1d307ea6de3ea50753b7250c54a9eb58115113dcc05d4dfc7a7ef6d17346b7d9 WatchSource:0}: Error finding container 1d307ea6de3ea50753b7250c54a9eb58115113dcc05d4dfc7a7ef6d17346b7d9: Status 404 returned error can't find the container with id 1d307ea6de3ea50753b7250c54a9eb58115113dcc05d4dfc7a7ef6d17346b7d9 Feb 16 17:16:21 crc kubenswrapper[4794]: I0216 17:16:21.891756 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7cfd877d99-ln65b" event={"ID":"336e2f2e-feed-48c4-8ef5-26630fbf649b","Type":"ContainerStarted","Data":"e3368832ffc34a93e38540c47491b89845aac411938fa3442e1bc9a24ad30556"} Feb 16 17:16:21 crc kubenswrapper[4794]: I0216 17:16:21.893038 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6c9857685-shg96" event={"ID":"e9e1f0f5-927b-4cc7-94c3-130c0a320750","Type":"ContainerStarted","Data":"1d307ea6de3ea50753b7250c54a9eb58115113dcc05d4dfc7a7ef6d17346b7d9"} Feb 16 17:16:27 crc kubenswrapper[4794]: I0216 17:16:27.938822 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7cfd877d99-ln65b" 
event={"ID":"336e2f2e-feed-48c4-8ef5-26630fbf649b","Type":"ContainerStarted","Data":"e8198801b20f0885933e55f5df5e3d23995cda1af0680069a30df3fbe345a45f"} Feb 16 17:16:27 crc kubenswrapper[4794]: I0216 17:16:27.939469 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7cfd877d99-ln65b" Feb 16 17:16:27 crc kubenswrapper[4794]: I0216 17:16:27.940697 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6c9857685-shg96" event={"ID":"e9e1f0f5-927b-4cc7-94c3-130c0a320750","Type":"ContainerStarted","Data":"12babc2df709f57075fb6f521c7c28e8d8e9473027239e9452b21c116564751a"} Feb 16 17:16:27 crc kubenswrapper[4794]: I0216 17:16:27.940850 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6c9857685-shg96" Feb 16 17:16:27 crc kubenswrapper[4794]: I0216 17:16:27.959021 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7cfd877d99-ln65b" podStartSLOduration=2.083507663 podStartE2EDuration="7.959006065s" podCreationTimestamp="2026-02-16 17:16:20 +0000 UTC" firstStartedPulling="2026-02-16 17:16:21.567894847 +0000 UTC m=+1007.515989494" lastFinishedPulling="2026-02-16 17:16:27.443393249 +0000 UTC m=+1013.391487896" observedRunningTime="2026-02-16 17:16:27.95741994 +0000 UTC m=+1013.905514587" watchObservedRunningTime="2026-02-16 17:16:27.959006065 +0000 UTC m=+1013.907100712" Feb 16 17:16:27 crc kubenswrapper[4794]: I0216 17:16:27.989997 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6c9857685-shg96" podStartSLOduration=2.394436606 podStartE2EDuration="7.989975059s" podCreationTimestamp="2026-02-16 17:16:20 +0000 UTC" firstStartedPulling="2026-02-16 17:16:21.862472353 +0000 UTC m=+1007.810567000" lastFinishedPulling="2026-02-16 
17:16:27.458010806 +0000 UTC m=+1013.406105453" observedRunningTime="2026-02-16 17:16:27.986389607 +0000 UTC m=+1013.934484264" watchObservedRunningTime="2026-02-16 17:16:27.989975059 +0000 UTC m=+1013.938069706" Feb 16 17:16:41 crc kubenswrapper[4794]: I0216 17:16:41.368672 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6c9857685-shg96" Feb 16 17:17:00 crc kubenswrapper[4794]: I0216 17:17:00.964718 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7cfd877d99-ln65b" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.765781 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-sjmrc"] Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.768839 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.771676 4794 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.771731 4794 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-4lw5p" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.779405 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.781063 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-pv9br"] Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.781975 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pv9br" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.787111 4794 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.796366 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-pv9br"] Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.870499 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b432c0dc-a16b-408b-b760-08c20e6a6e05-frr-sockets\") pod \"frr-k8s-sjmrc\" (UID: \"b432c0dc-a16b-408b-b760-08c20e6a6e05\") " pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.870567 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b432c0dc-a16b-408b-b760-08c20e6a6e05-frr-startup\") pod \"frr-k8s-sjmrc\" (UID: \"b432c0dc-a16b-408b-b760-08c20e6a6e05\") " pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.870588 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06b0fb65-95c5-4a34-ae4e-d787cf10733c-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-pv9br\" (UID: \"06b0fb65-95c5-4a34-ae4e-d787cf10733c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pv9br" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.871531 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b432c0dc-a16b-408b-b760-08c20e6a6e05-frr-conf\") pod \"frr-k8s-sjmrc\" (UID: \"b432c0dc-a16b-408b-b760-08c20e6a6e05\") " pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:01 crc 
kubenswrapper[4794]: I0216 17:17:01.871624 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b432c0dc-a16b-408b-b760-08c20e6a6e05-metrics-certs\") pod \"frr-k8s-sjmrc\" (UID: \"b432c0dc-a16b-408b-b760-08c20e6a6e05\") " pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.871762 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b432c0dc-a16b-408b-b760-08c20e6a6e05-metrics\") pod \"frr-k8s-sjmrc\" (UID: \"b432c0dc-a16b-408b-b760-08c20e6a6e05\") " pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.871825 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpd7d\" (UniqueName: \"kubernetes.io/projected/b432c0dc-a16b-408b-b760-08c20e6a6e05-kube-api-access-gpd7d\") pod \"frr-k8s-sjmrc\" (UID: \"b432c0dc-a16b-408b-b760-08c20e6a6e05\") " pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.871849 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b432c0dc-a16b-408b-b760-08c20e6a6e05-reloader\") pod \"frr-k8s-sjmrc\" (UID: \"b432c0dc-a16b-408b-b760-08c20e6a6e05\") " pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.872054 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gghfp\" (UniqueName: \"kubernetes.io/projected/06b0fb65-95c5-4a34-ae4e-d787cf10733c-kube-api-access-gghfp\") pod \"frr-k8s-webhook-server-78b44bf5bb-pv9br\" (UID: \"06b0fb65-95c5-4a34-ae4e-d787cf10733c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pv9br" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 
17:17:01.888594 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-pkjkp"] Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.889886 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-pkjkp" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.891803 4794 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.891961 4794 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.892155 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.893464 4794 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-2zl4l" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.895401 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-qmm5b"] Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.897709 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-qmm5b" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.899869 4794 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.921487 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-qmm5b"] Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.973549 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b432c0dc-a16b-408b-b760-08c20e6a6e05-frr-startup\") pod \"frr-k8s-sjmrc\" (UID: \"b432c0dc-a16b-408b-b760-08c20e6a6e05\") " pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.973611 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06b0fb65-95c5-4a34-ae4e-d787cf10733c-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-pv9br\" (UID: \"06b0fb65-95c5-4a34-ae4e-d787cf10733c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pv9br" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.973648 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/533c1ec2-44e4-4a34-8f40-5ca4dd3527db-metrics-certs\") pod \"controller-69bbfbf88f-qmm5b\" (UID: \"533c1ec2-44e4-4a34-8f40-5ca4dd3527db\") " pod="metallb-system/controller-69bbfbf88f-qmm5b" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.973684 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b432c0dc-a16b-408b-b760-08c20e6a6e05-frr-conf\") pod \"frr-k8s-sjmrc\" (UID: \"b432c0dc-a16b-408b-b760-08c20e6a6e05\") " pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.973722 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b432c0dc-a16b-408b-b760-08c20e6a6e05-metrics-certs\") pod \"frr-k8s-sjmrc\" (UID: \"b432c0dc-a16b-408b-b760-08c20e6a6e05\") " pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.973763 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0863f5e7-b46f-45a6-866e-a445bddeeed2-metallb-excludel2\") pod \"speaker-pkjkp\" (UID: \"0863f5e7-b46f-45a6-866e-a445bddeeed2\") " pod="metallb-system/speaker-pkjkp" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.973787 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0863f5e7-b46f-45a6-866e-a445bddeeed2-memberlist\") pod \"speaker-pkjkp\" (UID: \"0863f5e7-b46f-45a6-866e-a445bddeeed2\") " pod="metallb-system/speaker-pkjkp" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.973812 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0863f5e7-b46f-45a6-866e-a445bddeeed2-metrics-certs\") pod \"speaker-pkjkp\" (UID: \"0863f5e7-b46f-45a6-866e-a445bddeeed2\") " pod="metallb-system/speaker-pkjkp" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.973835 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/533c1ec2-44e4-4a34-8f40-5ca4dd3527db-cert\") pod \"controller-69bbfbf88f-qmm5b\" (UID: \"533c1ec2-44e4-4a34-8f40-5ca4dd3527db\") " pod="metallb-system/controller-69bbfbf88f-qmm5b" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.973857 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/b432c0dc-a16b-408b-b760-08c20e6a6e05-metrics\") pod \"frr-k8s-sjmrc\" (UID: \"b432c0dc-a16b-408b-b760-08c20e6a6e05\") " pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.973898 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpd7d\" (UniqueName: \"kubernetes.io/projected/b432c0dc-a16b-408b-b760-08c20e6a6e05-kube-api-access-gpd7d\") pod \"frr-k8s-sjmrc\" (UID: \"b432c0dc-a16b-408b-b760-08c20e6a6e05\") " pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.973926 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b432c0dc-a16b-408b-b760-08c20e6a6e05-reloader\") pod \"frr-k8s-sjmrc\" (UID: \"b432c0dc-a16b-408b-b760-08c20e6a6e05\") " pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.973990 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gghfp\" (UniqueName: \"kubernetes.io/projected/06b0fb65-95c5-4a34-ae4e-d787cf10733c-kube-api-access-gghfp\") pod \"frr-k8s-webhook-server-78b44bf5bb-pv9br\" (UID: \"06b0fb65-95c5-4a34-ae4e-d787cf10733c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pv9br" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.974027 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b432c0dc-a16b-408b-b760-08c20e6a6e05-frr-sockets\") pod \"frr-k8s-sjmrc\" (UID: \"b432c0dc-a16b-408b-b760-08c20e6a6e05\") " pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.974050 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdfr5\" (UniqueName: \"kubernetes.io/projected/533c1ec2-44e4-4a34-8f40-5ca4dd3527db-kube-api-access-zdfr5\") pod 
\"controller-69bbfbf88f-qmm5b\" (UID: \"533c1ec2-44e4-4a34-8f40-5ca4dd3527db\") " pod="metallb-system/controller-69bbfbf88f-qmm5b" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.974087 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpm6k\" (UniqueName: \"kubernetes.io/projected/0863f5e7-b46f-45a6-866e-a445bddeeed2-kube-api-access-hpm6k\") pod \"speaker-pkjkp\" (UID: \"0863f5e7-b46f-45a6-866e-a445bddeeed2\") " pod="metallb-system/speaker-pkjkp" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.974133 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b432c0dc-a16b-408b-b760-08c20e6a6e05-frr-conf\") pod \"frr-k8s-sjmrc\" (UID: \"b432c0dc-a16b-408b-b760-08c20e6a6e05\") " pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.974347 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b432c0dc-a16b-408b-b760-08c20e6a6e05-metrics\") pod \"frr-k8s-sjmrc\" (UID: \"b432c0dc-a16b-408b-b760-08c20e6a6e05\") " pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.974573 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b432c0dc-a16b-408b-b760-08c20e6a6e05-frr-startup\") pod \"frr-k8s-sjmrc\" (UID: \"b432c0dc-a16b-408b-b760-08c20e6a6e05\") " pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.974710 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b432c0dc-a16b-408b-b760-08c20e6a6e05-frr-sockets\") pod \"frr-k8s-sjmrc\" (UID: \"b432c0dc-a16b-408b-b760-08c20e6a6e05\") " pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.974821 4794 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b432c0dc-a16b-408b-b760-08c20e6a6e05-reloader\") pod \"frr-k8s-sjmrc\" (UID: \"b432c0dc-a16b-408b-b760-08c20e6a6e05\") " pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.979021 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06b0fb65-95c5-4a34-ae4e-d787cf10733c-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-pv9br\" (UID: \"06b0fb65-95c5-4a34-ae4e-d787cf10733c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pv9br" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.988959 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b432c0dc-a16b-408b-b760-08c20e6a6e05-metrics-certs\") pod \"frr-k8s-sjmrc\" (UID: \"b432c0dc-a16b-408b-b760-08c20e6a6e05\") " pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.989510 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gghfp\" (UniqueName: \"kubernetes.io/projected/06b0fb65-95c5-4a34-ae4e-d787cf10733c-kube-api-access-gghfp\") pod \"frr-k8s-webhook-server-78b44bf5bb-pv9br\" (UID: \"06b0fb65-95c5-4a34-ae4e-d787cf10733c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pv9br" Feb 16 17:17:01 crc kubenswrapper[4794]: I0216 17:17:01.993049 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpd7d\" (UniqueName: \"kubernetes.io/projected/b432c0dc-a16b-408b-b760-08c20e6a6e05-kube-api-access-gpd7d\") pod \"frr-k8s-sjmrc\" (UID: \"b432c0dc-a16b-408b-b760-08c20e6a6e05\") " pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:02 crc kubenswrapper[4794]: I0216 17:17:02.075210 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdfr5\" (UniqueName: 
\"kubernetes.io/projected/533c1ec2-44e4-4a34-8f40-5ca4dd3527db-kube-api-access-zdfr5\") pod \"controller-69bbfbf88f-qmm5b\" (UID: \"533c1ec2-44e4-4a34-8f40-5ca4dd3527db\") " pod="metallb-system/controller-69bbfbf88f-qmm5b" Feb 16 17:17:02 crc kubenswrapper[4794]: I0216 17:17:02.075278 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpm6k\" (UniqueName: \"kubernetes.io/projected/0863f5e7-b46f-45a6-866e-a445bddeeed2-kube-api-access-hpm6k\") pod \"speaker-pkjkp\" (UID: \"0863f5e7-b46f-45a6-866e-a445bddeeed2\") " pod="metallb-system/speaker-pkjkp" Feb 16 17:17:02 crc kubenswrapper[4794]: I0216 17:17:02.075352 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/533c1ec2-44e4-4a34-8f40-5ca4dd3527db-metrics-certs\") pod \"controller-69bbfbf88f-qmm5b\" (UID: \"533c1ec2-44e4-4a34-8f40-5ca4dd3527db\") " pod="metallb-system/controller-69bbfbf88f-qmm5b" Feb 16 17:17:02 crc kubenswrapper[4794]: I0216 17:17:02.075415 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0863f5e7-b46f-45a6-866e-a445bddeeed2-metallb-excludel2\") pod \"speaker-pkjkp\" (UID: \"0863f5e7-b46f-45a6-866e-a445bddeeed2\") " pod="metallb-system/speaker-pkjkp" Feb 16 17:17:02 crc kubenswrapper[4794]: I0216 17:17:02.075443 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0863f5e7-b46f-45a6-866e-a445bddeeed2-memberlist\") pod \"speaker-pkjkp\" (UID: \"0863f5e7-b46f-45a6-866e-a445bddeeed2\") " pod="metallb-system/speaker-pkjkp" Feb 16 17:17:02 crc kubenswrapper[4794]: I0216 17:17:02.075469 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0863f5e7-b46f-45a6-866e-a445bddeeed2-metrics-certs\") pod \"speaker-pkjkp\" (UID: 
\"0863f5e7-b46f-45a6-866e-a445bddeeed2\") " pod="metallb-system/speaker-pkjkp" Feb 16 17:17:02 crc kubenswrapper[4794]: I0216 17:17:02.075501 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/533c1ec2-44e4-4a34-8f40-5ca4dd3527db-cert\") pod \"controller-69bbfbf88f-qmm5b\" (UID: \"533c1ec2-44e4-4a34-8f40-5ca4dd3527db\") " pod="metallb-system/controller-69bbfbf88f-qmm5b" Feb 16 17:17:02 crc kubenswrapper[4794]: E0216 17:17:02.075721 4794 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 17:17:02 crc kubenswrapper[4794]: E0216 17:17:02.075792 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0863f5e7-b46f-45a6-866e-a445bddeeed2-memberlist podName:0863f5e7-b46f-45a6-866e-a445bddeeed2 nodeName:}" failed. No retries permitted until 2026-02-16 17:17:02.575771731 +0000 UTC m=+1048.523866378 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0863f5e7-b46f-45a6-866e-a445bddeeed2-memberlist") pod "speaker-pkjkp" (UID: "0863f5e7-b46f-45a6-866e-a445bddeeed2") : secret "metallb-memberlist" not found Feb 16 17:17:02 crc kubenswrapper[4794]: E0216 17:17:02.075813 4794 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 16 17:17:02 crc kubenswrapper[4794]: E0216 17:17:02.075849 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/533c1ec2-44e4-4a34-8f40-5ca4dd3527db-metrics-certs podName:533c1ec2-44e4-4a34-8f40-5ca4dd3527db nodeName:}" failed. No retries permitted until 2026-02-16 17:17:02.575839513 +0000 UTC m=+1048.523934160 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/533c1ec2-44e4-4a34-8f40-5ca4dd3527db-metrics-certs") pod "controller-69bbfbf88f-qmm5b" (UID: "533c1ec2-44e4-4a34-8f40-5ca4dd3527db") : secret "controller-certs-secret" not found Feb 16 17:17:02 crc kubenswrapper[4794]: E0216 17:17:02.075990 4794 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 16 17:17:02 crc kubenswrapper[4794]: E0216 17:17:02.076125 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0863f5e7-b46f-45a6-866e-a445bddeeed2-metrics-certs podName:0863f5e7-b46f-45a6-866e-a445bddeeed2 nodeName:}" failed. No retries permitted until 2026-02-16 17:17:02.576104051 +0000 UTC m=+1048.524198698 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0863f5e7-b46f-45a6-866e-a445bddeeed2-metrics-certs") pod "speaker-pkjkp" (UID: "0863f5e7-b46f-45a6-866e-a445bddeeed2") : secret "speaker-certs-secret" not found Feb 16 17:17:02 crc kubenswrapper[4794]: I0216 17:17:02.076218 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0863f5e7-b46f-45a6-866e-a445bddeeed2-metallb-excludel2\") pod \"speaker-pkjkp\" (UID: \"0863f5e7-b46f-45a6-866e-a445bddeeed2\") " pod="metallb-system/speaker-pkjkp" Feb 16 17:17:02 crc kubenswrapper[4794]: I0216 17:17:02.076968 4794 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 17:17:02 crc kubenswrapper[4794]: I0216 17:17:02.089315 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/533c1ec2-44e4-4a34-8f40-5ca4dd3527db-cert\") pod \"controller-69bbfbf88f-qmm5b\" (UID: \"533c1ec2-44e4-4a34-8f40-5ca4dd3527db\") " pod="metallb-system/controller-69bbfbf88f-qmm5b" Feb 16 17:17:02 crc 
kubenswrapper[4794]: I0216 17:17:02.090344 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:02 crc kubenswrapper[4794]: I0216 17:17:02.101461 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdfr5\" (UniqueName: \"kubernetes.io/projected/533c1ec2-44e4-4a34-8f40-5ca4dd3527db-kube-api-access-zdfr5\") pod \"controller-69bbfbf88f-qmm5b\" (UID: \"533c1ec2-44e4-4a34-8f40-5ca4dd3527db\") " pod="metallb-system/controller-69bbfbf88f-qmm5b" Feb 16 17:17:02 crc kubenswrapper[4794]: I0216 17:17:02.105951 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pv9br" Feb 16 17:17:02 crc kubenswrapper[4794]: I0216 17:17:02.106750 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpm6k\" (UniqueName: \"kubernetes.io/projected/0863f5e7-b46f-45a6-866e-a445bddeeed2-kube-api-access-hpm6k\") pod \"speaker-pkjkp\" (UID: \"0863f5e7-b46f-45a6-866e-a445bddeeed2\") " pod="metallb-system/speaker-pkjkp" Feb 16 17:17:02 crc kubenswrapper[4794]: I0216 17:17:02.583852 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/533c1ec2-44e4-4a34-8f40-5ca4dd3527db-metrics-certs\") pod \"controller-69bbfbf88f-qmm5b\" (UID: \"533c1ec2-44e4-4a34-8f40-5ca4dd3527db\") " pod="metallb-system/controller-69bbfbf88f-qmm5b" Feb 16 17:17:02 crc kubenswrapper[4794]: I0216 17:17:02.583926 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0863f5e7-b46f-45a6-866e-a445bddeeed2-memberlist\") pod \"speaker-pkjkp\" (UID: \"0863f5e7-b46f-45a6-866e-a445bddeeed2\") " pod="metallb-system/speaker-pkjkp" Feb 16 17:17:02 crc kubenswrapper[4794]: I0216 17:17:02.583949 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0863f5e7-b46f-45a6-866e-a445bddeeed2-metrics-certs\") pod \"speaker-pkjkp\" (UID: \"0863f5e7-b46f-45a6-866e-a445bddeeed2\") " pod="metallb-system/speaker-pkjkp" Feb 16 17:17:02 crc kubenswrapper[4794]: E0216 17:17:02.586070 4794 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 17:17:02 crc kubenswrapper[4794]: E0216 17:17:02.586232 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0863f5e7-b46f-45a6-866e-a445bddeeed2-memberlist podName:0863f5e7-b46f-45a6-866e-a445bddeeed2 nodeName:}" failed. No retries permitted until 2026-02-16 17:17:03.58620932 +0000 UTC m=+1049.534303987 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0863f5e7-b46f-45a6-866e-a445bddeeed2-memberlist") pod "speaker-pkjkp" (UID: "0863f5e7-b46f-45a6-866e-a445bddeeed2") : secret "metallb-memberlist" not found Feb 16 17:17:02 crc kubenswrapper[4794]: I0216 17:17:02.588937 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0863f5e7-b46f-45a6-866e-a445bddeeed2-metrics-certs\") pod \"speaker-pkjkp\" (UID: \"0863f5e7-b46f-45a6-866e-a445bddeeed2\") " pod="metallb-system/speaker-pkjkp" Feb 16 17:17:02 crc kubenswrapper[4794]: I0216 17:17:02.590071 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/533c1ec2-44e4-4a34-8f40-5ca4dd3527db-metrics-certs\") pod \"controller-69bbfbf88f-qmm5b\" (UID: \"533c1ec2-44e4-4a34-8f40-5ca4dd3527db\") " pod="metallb-system/controller-69bbfbf88f-qmm5b" Feb 16 17:17:02 crc kubenswrapper[4794]: I0216 17:17:02.598767 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-pv9br"] Feb 16 17:17:02 crc kubenswrapper[4794]: W0216 17:17:02.600043 4794 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06b0fb65_95c5_4a34_ae4e_d787cf10733c.slice/crio-14fef0ef2f69604fc705923b1b62074ef287d6e45edd19b04e081435dfc88871 WatchSource:0}: Error finding container 14fef0ef2f69604fc705923b1b62074ef287d6e45edd19b04e081435dfc88871: Status 404 returned error can't find the container with id 14fef0ef2f69604fc705923b1b62074ef287d6e45edd19b04e081435dfc88871 Feb 16 17:17:02 crc kubenswrapper[4794]: I0216 17:17:02.814401 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-qmm5b" Feb 16 17:17:03 crc kubenswrapper[4794]: I0216 17:17:03.237536 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sjmrc" event={"ID":"b432c0dc-a16b-408b-b760-08c20e6a6e05","Type":"ContainerStarted","Data":"d8553f3b29cc9828b874cc1891e61153cb9a212c187034336156041dbba4a8de"} Feb 16 17:17:03 crc kubenswrapper[4794]: I0216 17:17:03.239039 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pv9br" event={"ID":"06b0fb65-95c5-4a34-ae4e-d787cf10733c","Type":"ContainerStarted","Data":"14fef0ef2f69604fc705923b1b62074ef287d6e45edd19b04e081435dfc88871"} Feb 16 17:17:03 crc kubenswrapper[4794]: I0216 17:17:03.268777 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-qmm5b"] Feb 16 17:17:03 crc kubenswrapper[4794]: I0216 17:17:03.610404 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0863f5e7-b46f-45a6-866e-a445bddeeed2-memberlist\") pod \"speaker-pkjkp\" (UID: \"0863f5e7-b46f-45a6-866e-a445bddeeed2\") " pod="metallb-system/speaker-pkjkp" Feb 16 17:17:03 crc kubenswrapper[4794]: I0216 17:17:03.619239 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/0863f5e7-b46f-45a6-866e-a445bddeeed2-memberlist\") pod \"speaker-pkjkp\" (UID: \"0863f5e7-b46f-45a6-866e-a445bddeeed2\") " pod="metallb-system/speaker-pkjkp" Feb 16 17:17:03 crc kubenswrapper[4794]: I0216 17:17:03.707317 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-pkjkp" Feb 16 17:17:03 crc kubenswrapper[4794]: W0216 17:17:03.745977 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0863f5e7_b46f_45a6_866e_a445bddeeed2.slice/crio-08b412bf59900af88ef67589b98ee425394bed815c7bd99dd8b0d7efbf35eb01 WatchSource:0}: Error finding container 08b412bf59900af88ef67589b98ee425394bed815c7bd99dd8b0d7efbf35eb01: Status 404 returned error can't find the container with id 08b412bf59900af88ef67589b98ee425394bed815c7bd99dd8b0d7efbf35eb01 Feb 16 17:17:04 crc kubenswrapper[4794]: I0216 17:17:04.285145 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pkjkp" event={"ID":"0863f5e7-b46f-45a6-866e-a445bddeeed2","Type":"ContainerStarted","Data":"6c8b1d34b8b4eb4fa7d65f12240e1dfc5754c55efabef92555c2afbe6ebfcc2e"} Feb 16 17:17:04 crc kubenswrapper[4794]: I0216 17:17:04.285195 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pkjkp" event={"ID":"0863f5e7-b46f-45a6-866e-a445bddeeed2","Type":"ContainerStarted","Data":"08b412bf59900af88ef67589b98ee425394bed815c7bd99dd8b0d7efbf35eb01"} Feb 16 17:17:04 crc kubenswrapper[4794]: I0216 17:17:04.297916 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-qmm5b" event={"ID":"533c1ec2-44e4-4a34-8f40-5ca4dd3527db","Type":"ContainerStarted","Data":"a9d61a0f27f26c2d2ea50a1a43f4b8de70ea26d164b2ba427d30e5b77207a0cd"} Feb 16 17:17:04 crc kubenswrapper[4794]: I0216 17:17:04.298004 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-qmm5b" 
event={"ID":"533c1ec2-44e4-4a34-8f40-5ca4dd3527db","Type":"ContainerStarted","Data":"94e671a58f0766a719e3451d1dc45e1b891b46382d492adbf1e6fd4cb6ec7a5d"} Feb 16 17:17:04 crc kubenswrapper[4794]: I0216 17:17:04.298037 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-qmm5b" event={"ID":"533c1ec2-44e4-4a34-8f40-5ca4dd3527db","Type":"ContainerStarted","Data":"e4825fb1e9771207597811a6798573413ce60818a1f2ca49062b16d34a2c87ac"} Feb 16 17:17:04 crc kubenswrapper[4794]: I0216 17:17:04.300234 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-qmm5b" Feb 16 17:17:04 crc kubenswrapper[4794]: I0216 17:17:04.327638 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-qmm5b" podStartSLOduration=3.327612242 podStartE2EDuration="3.327612242s" podCreationTimestamp="2026-02-16 17:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:17:04.327389935 +0000 UTC m=+1050.275484582" watchObservedRunningTime="2026-02-16 17:17:04.327612242 +0000 UTC m=+1050.275706889" Feb 16 17:17:05 crc kubenswrapper[4794]: I0216 17:17:05.313469 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pkjkp" event={"ID":"0863f5e7-b46f-45a6-866e-a445bddeeed2","Type":"ContainerStarted","Data":"9f8ba8cb93f0d7bd8172545cd8e9bb2a392477809d94edf451bbbc7608994ec1"} Feb 16 17:17:05 crc kubenswrapper[4794]: I0216 17:17:05.313871 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-pkjkp" Feb 16 17:17:05 crc kubenswrapper[4794]: I0216 17:17:05.344806 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-pkjkp" podStartSLOduration=4.344773962 podStartE2EDuration="4.344773962s" podCreationTimestamp="2026-02-16 17:17:01 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:17:05.334027835 +0000 UTC m=+1051.282122472" watchObservedRunningTime="2026-02-16 17:17:05.344773962 +0000 UTC m=+1051.292868609" Feb 16 17:17:10 crc kubenswrapper[4794]: I0216 17:17:10.357475 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pv9br" event={"ID":"06b0fb65-95c5-4a34-ae4e-d787cf10733c","Type":"ContainerStarted","Data":"d90822c5da61d98d49b5a38eddac891cfc28261d2cad944d887efc65425df7a5"} Feb 16 17:17:10 crc kubenswrapper[4794]: I0216 17:17:10.358211 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pv9br" Feb 16 17:17:10 crc kubenswrapper[4794]: I0216 17:17:10.359705 4794 generic.go:334] "Generic (PLEG): container finished" podID="b432c0dc-a16b-408b-b760-08c20e6a6e05" containerID="a4f60dc3790dcfc91aded29c7e8cba0294253dc7476650bf14c756addb1dcfd7" exitCode=0 Feb 16 17:17:10 crc kubenswrapper[4794]: I0216 17:17:10.359741 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sjmrc" event={"ID":"b432c0dc-a16b-408b-b760-08c20e6a6e05","Type":"ContainerDied","Data":"a4f60dc3790dcfc91aded29c7e8cba0294253dc7476650bf14c756addb1dcfd7"} Feb 16 17:17:10 crc kubenswrapper[4794]: I0216 17:17:10.384021 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pv9br" podStartSLOduration=2.091163952 podStartE2EDuration="9.384002717s" podCreationTimestamp="2026-02-16 17:17:01 +0000 UTC" firstStartedPulling="2026-02-16 17:17:02.604408739 +0000 UTC m=+1048.552503386" lastFinishedPulling="2026-02-16 17:17:09.897247504 +0000 UTC m=+1055.845342151" observedRunningTime="2026-02-16 17:17:10.379944891 +0000 UTC m=+1056.328039538" watchObservedRunningTime="2026-02-16 17:17:10.384002717 +0000 UTC m=+1056.332097364" Feb 16 
17:17:11 crc kubenswrapper[4794]: I0216 17:17:11.377707 4794 generic.go:334] "Generic (PLEG): container finished" podID="b432c0dc-a16b-408b-b760-08c20e6a6e05" containerID="4dc4e0b6f0434ce5e05bab1af64cad951a08685e8583463999cf59942db078bc" exitCode=0 Feb 16 17:17:11 crc kubenswrapper[4794]: I0216 17:17:11.377814 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sjmrc" event={"ID":"b432c0dc-a16b-408b-b760-08c20e6a6e05","Type":"ContainerDied","Data":"4dc4e0b6f0434ce5e05bab1af64cad951a08685e8583463999cf59942db078bc"} Feb 16 17:17:12 crc kubenswrapper[4794]: I0216 17:17:12.387588 4794 generic.go:334] "Generic (PLEG): container finished" podID="b432c0dc-a16b-408b-b760-08c20e6a6e05" containerID="1bbe572b30aec2b954fbdc7efce3e62ddcc269860633221770e3edf52b51312b" exitCode=0 Feb 16 17:17:12 crc kubenswrapper[4794]: I0216 17:17:12.387675 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sjmrc" event={"ID":"b432c0dc-a16b-408b-b760-08c20e6a6e05","Type":"ContainerDied","Data":"1bbe572b30aec2b954fbdc7efce3e62ddcc269860633221770e3edf52b51312b"} Feb 16 17:17:13 crc kubenswrapper[4794]: I0216 17:17:13.400224 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sjmrc" event={"ID":"b432c0dc-a16b-408b-b760-08c20e6a6e05","Type":"ContainerStarted","Data":"f8138d43cd24c4c1ee20df3661f2bfada265775cd8effd1d7d24634b402658f4"} Feb 16 17:17:13 crc kubenswrapper[4794]: I0216 17:17:13.400529 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sjmrc" event={"ID":"b432c0dc-a16b-408b-b760-08c20e6a6e05","Type":"ContainerStarted","Data":"0f1ab4a84d0b4e0229da5b4b400de1fda51147b74ef82602f0779b3ee083073f"} Feb 16 17:17:13 crc kubenswrapper[4794]: I0216 17:17:13.400539 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sjmrc" 
event={"ID":"b432c0dc-a16b-408b-b760-08c20e6a6e05","Type":"ContainerStarted","Data":"82789785550ce1376a93826742c27cf0d9ed53618e22e49cb8a1a32d0e8a26eb"} Feb 16 17:17:13 crc kubenswrapper[4794]: I0216 17:17:13.400551 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sjmrc" event={"ID":"b432c0dc-a16b-408b-b760-08c20e6a6e05","Type":"ContainerStarted","Data":"ff8d695406a5a0baf5861d0b95f952a3a2498cc6ae17fd4e85e4dd6dc4293bf1"} Feb 16 17:17:13 crc kubenswrapper[4794]: I0216 17:17:13.400560 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sjmrc" event={"ID":"b432c0dc-a16b-408b-b760-08c20e6a6e05","Type":"ContainerStarted","Data":"97f616e5d0a36c2f6459ccab87f2a86320503c28ae15b67a8118d44096639ebf"} Feb 16 17:17:14 crc kubenswrapper[4794]: I0216 17:17:14.413037 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sjmrc" event={"ID":"b432c0dc-a16b-408b-b760-08c20e6a6e05","Type":"ContainerStarted","Data":"2995ceb0344207c9d5675d0c8db80613b32c068721e845f2361f41e1296a3474"} Feb 16 17:17:14 crc kubenswrapper[4794]: I0216 17:17:14.413865 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:14 crc kubenswrapper[4794]: I0216 17:17:14.440694 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-sjmrc" podStartSLOduration=5.854475292 podStartE2EDuration="13.440671209s" podCreationTimestamp="2026-02-16 17:17:01 +0000 UTC" firstStartedPulling="2026-02-16 17:17:02.307385582 +0000 UTC m=+1048.255480229" lastFinishedPulling="2026-02-16 17:17:09.893581499 +0000 UTC m=+1055.841676146" observedRunningTime="2026-02-16 17:17:14.433187066 +0000 UTC m=+1060.381281713" watchObservedRunningTime="2026-02-16 17:17:14.440671209 +0000 UTC m=+1060.388765866" Feb 16 17:17:17 crc kubenswrapper[4794]: I0216 17:17:17.091076 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:17 crc kubenswrapper[4794]: I0216 17:17:17.135407 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:20 crc kubenswrapper[4794]: I0216 17:17:20.140552 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:17:20 crc kubenswrapper[4794]: I0216 17:17:20.140896 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:17:22 crc kubenswrapper[4794]: I0216 17:17:22.095490 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-sjmrc" Feb 16 17:17:22 crc kubenswrapper[4794]: I0216 17:17:22.114743 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pv9br" Feb 16 17:17:22 crc kubenswrapper[4794]: I0216 17:17:22.819207 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-qmm5b" Feb 16 17:17:23 crc kubenswrapper[4794]: I0216 17:17:23.710457 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-pkjkp" Feb 16 17:17:26 crc kubenswrapper[4794]: I0216 17:17:26.359892 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xkdfj"] Feb 16 17:17:26 crc kubenswrapper[4794]: I0216 17:17:26.362759 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xkdfj" Feb 16 17:17:26 crc kubenswrapper[4794]: I0216 17:17:26.366645 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xkdfj"] Feb 16 17:17:26 crc kubenswrapper[4794]: I0216 17:17:26.376738 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 16 17:17:26 crc kubenswrapper[4794]: I0216 17:17:26.376847 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-7kbht" Feb 16 17:17:26 crc kubenswrapper[4794]: I0216 17:17:26.376913 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 16 17:17:26 crc kubenswrapper[4794]: I0216 17:17:26.453387 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrgx5\" (UniqueName: \"kubernetes.io/projected/a9ad8b05-6f22-432a-b192-f1ea8f52b425-kube-api-access-jrgx5\") pod \"openstack-operator-index-xkdfj\" (UID: \"a9ad8b05-6f22-432a-b192-f1ea8f52b425\") " pod="openstack-operators/openstack-operator-index-xkdfj" Feb 16 17:17:26 crc kubenswrapper[4794]: I0216 17:17:26.556172 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrgx5\" (UniqueName: \"kubernetes.io/projected/a9ad8b05-6f22-432a-b192-f1ea8f52b425-kube-api-access-jrgx5\") pod \"openstack-operator-index-xkdfj\" (UID: \"a9ad8b05-6f22-432a-b192-f1ea8f52b425\") " pod="openstack-operators/openstack-operator-index-xkdfj" Feb 16 17:17:26 crc kubenswrapper[4794]: I0216 17:17:26.585027 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrgx5\" (UniqueName: \"kubernetes.io/projected/a9ad8b05-6f22-432a-b192-f1ea8f52b425-kube-api-access-jrgx5\") pod \"openstack-operator-index-xkdfj\" (UID: 
\"a9ad8b05-6f22-432a-b192-f1ea8f52b425\") " pod="openstack-operators/openstack-operator-index-xkdfj" Feb 16 17:17:26 crc kubenswrapper[4794]: I0216 17:17:26.698248 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xkdfj" Feb 16 17:17:27 crc kubenswrapper[4794]: I0216 17:17:27.101478 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xkdfj"] Feb 16 17:17:27 crc kubenswrapper[4794]: W0216 17:17:27.106699 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9ad8b05_6f22_432a_b192_f1ea8f52b425.slice/crio-aafa392e5f62e17d298a197979e40f626a3f0ba411ac2c58330a7e7b21db60e0 WatchSource:0}: Error finding container aafa392e5f62e17d298a197979e40f626a3f0ba411ac2c58330a7e7b21db60e0: Status 404 returned error can't find the container with id aafa392e5f62e17d298a197979e40f626a3f0ba411ac2c58330a7e7b21db60e0 Feb 16 17:17:27 crc kubenswrapper[4794]: I0216 17:17:27.521122 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xkdfj" event={"ID":"a9ad8b05-6f22-432a-b192-f1ea8f52b425","Type":"ContainerStarted","Data":"aafa392e5f62e17d298a197979e40f626a3f0ba411ac2c58330a7e7b21db60e0"} Feb 16 17:17:29 crc kubenswrapper[4794]: I0216 17:17:29.732474 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-xkdfj"] Feb 16 17:17:30 crc kubenswrapper[4794]: I0216 17:17:30.344941 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-wrrdh"] Feb 16 17:17:30 crc kubenswrapper[4794]: I0216 17:17:30.346827 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wrrdh" Feb 16 17:17:30 crc kubenswrapper[4794]: I0216 17:17:30.414692 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wrrdh"] Feb 16 17:17:30 crc kubenswrapper[4794]: I0216 17:17:30.419749 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kstmj\" (UniqueName: \"kubernetes.io/projected/48aadbe0-8241-422e-a086-b1e1c0d2d9bd-kube-api-access-kstmj\") pod \"openstack-operator-index-wrrdh\" (UID: \"48aadbe0-8241-422e-a086-b1e1c0d2d9bd\") " pod="openstack-operators/openstack-operator-index-wrrdh" Feb 16 17:17:30 crc kubenswrapper[4794]: I0216 17:17:30.521092 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kstmj\" (UniqueName: \"kubernetes.io/projected/48aadbe0-8241-422e-a086-b1e1c0d2d9bd-kube-api-access-kstmj\") pod \"openstack-operator-index-wrrdh\" (UID: \"48aadbe0-8241-422e-a086-b1e1c0d2d9bd\") " pod="openstack-operators/openstack-operator-index-wrrdh" Feb 16 17:17:30 crc kubenswrapper[4794]: I0216 17:17:30.537846 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kstmj\" (UniqueName: \"kubernetes.io/projected/48aadbe0-8241-422e-a086-b1e1c0d2d9bd-kube-api-access-kstmj\") pod \"openstack-operator-index-wrrdh\" (UID: \"48aadbe0-8241-422e-a086-b1e1c0d2d9bd\") " pod="openstack-operators/openstack-operator-index-wrrdh" Feb 16 17:17:30 crc kubenswrapper[4794]: I0216 17:17:30.673909 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xkdfj" event={"ID":"a9ad8b05-6f22-432a-b192-f1ea8f52b425","Type":"ContainerStarted","Data":"8b7f7842d8282c09140763408e96e36949619ef85ebc12f730b6b6157b8982c4"} Feb 16 17:17:30 crc kubenswrapper[4794]: I0216 17:17:30.674033 4794 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack-operators/openstack-operator-index-xkdfj" podUID="a9ad8b05-6f22-432a-b192-f1ea8f52b425" containerName="registry-server" containerID="cri-o://8b7f7842d8282c09140763408e96e36949619ef85ebc12f730b6b6157b8982c4" gracePeriod=2 Feb 16 17:17:30 crc kubenswrapper[4794]: I0216 17:17:30.700606 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xkdfj" podStartSLOduration=2.017162524 podStartE2EDuration="4.700588852s" podCreationTimestamp="2026-02-16 17:17:26 +0000 UTC" firstStartedPulling="2026-02-16 17:17:27.108633395 +0000 UTC m=+1073.056728042" lastFinishedPulling="2026-02-16 17:17:29.792059723 +0000 UTC m=+1075.740154370" observedRunningTime="2026-02-16 17:17:30.697214296 +0000 UTC m=+1076.645308943" watchObservedRunningTime="2026-02-16 17:17:30.700588852 +0000 UTC m=+1076.648683499" Feb 16 17:17:30 crc kubenswrapper[4794]: I0216 17:17:30.722022 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-wrrdh" Feb 16 17:17:31 crc kubenswrapper[4794]: I0216 17:17:31.150631 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wrrdh"] Feb 16 17:17:31 crc kubenswrapper[4794]: W0216 17:17:31.160720 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48aadbe0_8241_422e_a086_b1e1c0d2d9bd.slice/crio-f26621069ee3773998c543132f8d43136018f4d1e90b8176d9ba1761c377093b WatchSource:0}: Error finding container f26621069ee3773998c543132f8d43136018f4d1e90b8176d9ba1761c377093b: Status 404 returned error can't find the container with id f26621069ee3773998c543132f8d43136018f4d1e90b8176d9ba1761c377093b Feb 16 17:17:31 crc kubenswrapper[4794]: I0216 17:17:31.218404 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xkdfj" Feb 16 17:17:31 crc kubenswrapper[4794]: I0216 17:17:31.238911 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrgx5\" (UniqueName: \"kubernetes.io/projected/a9ad8b05-6f22-432a-b192-f1ea8f52b425-kube-api-access-jrgx5\") pod \"a9ad8b05-6f22-432a-b192-f1ea8f52b425\" (UID: \"a9ad8b05-6f22-432a-b192-f1ea8f52b425\") " Feb 16 17:17:31 crc kubenswrapper[4794]: I0216 17:17:31.245407 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9ad8b05-6f22-432a-b192-f1ea8f52b425-kube-api-access-jrgx5" (OuterVolumeSpecName: "kube-api-access-jrgx5") pod "a9ad8b05-6f22-432a-b192-f1ea8f52b425" (UID: "a9ad8b05-6f22-432a-b192-f1ea8f52b425"). InnerVolumeSpecName "kube-api-access-jrgx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:17:31 crc kubenswrapper[4794]: I0216 17:17:31.343890 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrgx5\" (UniqueName: \"kubernetes.io/projected/a9ad8b05-6f22-432a-b192-f1ea8f52b425-kube-api-access-jrgx5\") on node \"crc\" DevicePath \"\"" Feb 16 17:17:31 crc kubenswrapper[4794]: I0216 17:17:31.682722 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wrrdh" event={"ID":"48aadbe0-8241-422e-a086-b1e1c0d2d9bd","Type":"ContainerStarted","Data":"339f4ec71e28b252953366ab6c90f0bf425aec6b383f36661b5218bb30e289ac"} Feb 16 17:17:31 crc kubenswrapper[4794]: I0216 17:17:31.682767 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wrrdh" event={"ID":"48aadbe0-8241-422e-a086-b1e1c0d2d9bd","Type":"ContainerStarted","Data":"f26621069ee3773998c543132f8d43136018f4d1e90b8176d9ba1761c377093b"} Feb 16 17:17:31 crc kubenswrapper[4794]: I0216 17:17:31.684809 4794 generic.go:334] "Generic (PLEG): container finished" 
podID="a9ad8b05-6f22-432a-b192-f1ea8f52b425" containerID="8b7f7842d8282c09140763408e96e36949619ef85ebc12f730b6b6157b8982c4" exitCode=0 Feb 16 17:17:31 crc kubenswrapper[4794]: I0216 17:17:31.684844 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xkdfj" event={"ID":"a9ad8b05-6f22-432a-b192-f1ea8f52b425","Type":"ContainerDied","Data":"8b7f7842d8282c09140763408e96e36949619ef85ebc12f730b6b6157b8982c4"} Feb 16 17:17:31 crc kubenswrapper[4794]: I0216 17:17:31.684865 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xkdfj" event={"ID":"a9ad8b05-6f22-432a-b192-f1ea8f52b425","Type":"ContainerDied","Data":"aafa392e5f62e17d298a197979e40f626a3f0ba411ac2c58330a7e7b21db60e0"} Feb 16 17:17:31 crc kubenswrapper[4794]: I0216 17:17:31.684874 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xkdfj" Feb 16 17:17:31 crc kubenswrapper[4794]: I0216 17:17:31.684884 4794 scope.go:117] "RemoveContainer" containerID="8b7f7842d8282c09140763408e96e36949619ef85ebc12f730b6b6157b8982c4" Feb 16 17:17:31 crc kubenswrapper[4794]: I0216 17:17:31.702703 4794 scope.go:117] "RemoveContainer" containerID="8b7f7842d8282c09140763408e96e36949619ef85ebc12f730b6b6157b8982c4" Feb 16 17:17:31 crc kubenswrapper[4794]: E0216 17:17:31.703986 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b7f7842d8282c09140763408e96e36949619ef85ebc12f730b6b6157b8982c4\": container with ID starting with 8b7f7842d8282c09140763408e96e36949619ef85ebc12f730b6b6157b8982c4 not found: ID does not exist" containerID="8b7f7842d8282c09140763408e96e36949619ef85ebc12f730b6b6157b8982c4" Feb 16 17:17:31 crc kubenswrapper[4794]: I0216 17:17:31.704022 4794 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8b7f7842d8282c09140763408e96e36949619ef85ebc12f730b6b6157b8982c4"} err="failed to get container status \"8b7f7842d8282c09140763408e96e36949619ef85ebc12f730b6b6157b8982c4\": rpc error: code = NotFound desc = could not find container \"8b7f7842d8282c09140763408e96e36949619ef85ebc12f730b6b6157b8982c4\": container with ID starting with 8b7f7842d8282c09140763408e96e36949619ef85ebc12f730b6b6157b8982c4 not found: ID does not exist" Feb 16 17:17:31 crc kubenswrapper[4794]: I0216 17:17:31.709579 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-wrrdh" podStartSLOduration=1.665818442 podStartE2EDuration="1.70955994s" podCreationTimestamp="2026-02-16 17:17:30 +0000 UTC" firstStartedPulling="2026-02-16 17:17:31.164525564 +0000 UTC m=+1077.112620211" lastFinishedPulling="2026-02-16 17:17:31.208267062 +0000 UTC m=+1077.156361709" observedRunningTime="2026-02-16 17:17:31.70151261 +0000 UTC m=+1077.649607257" watchObservedRunningTime="2026-02-16 17:17:31.70955994 +0000 UTC m=+1077.657654587" Feb 16 17:17:31 crc kubenswrapper[4794]: I0216 17:17:31.716842 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-xkdfj"] Feb 16 17:17:31 crc kubenswrapper[4794]: I0216 17:17:31.723218 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-xkdfj"] Feb 16 17:17:32 crc kubenswrapper[4794]: I0216 17:17:32.816789 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9ad8b05-6f22-432a-b192-f1ea8f52b425" path="/var/lib/kubelet/pods/a9ad8b05-6f22-432a-b192-f1ea8f52b425/volumes" Feb 16 17:17:40 crc kubenswrapper[4794]: I0216 17:17:40.723173 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-wrrdh" Feb 16 17:17:40 crc kubenswrapper[4794]: I0216 17:17:40.724041 4794 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/openstack-operator-index-wrrdh" Feb 16 17:17:40 crc kubenswrapper[4794]: I0216 17:17:40.759712 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-wrrdh" Feb 16 17:17:40 crc kubenswrapper[4794]: I0216 17:17:40.801738 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-wrrdh" Feb 16 17:17:46 crc kubenswrapper[4794]: I0216 17:17:46.670109 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr"] Feb 16 17:17:46 crc kubenswrapper[4794]: E0216 17:17:46.671849 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9ad8b05-6f22-432a-b192-f1ea8f52b425" containerName="registry-server" Feb 16 17:17:46 crc kubenswrapper[4794]: I0216 17:17:46.671871 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9ad8b05-6f22-432a-b192-f1ea8f52b425" containerName="registry-server" Feb 16 17:17:46 crc kubenswrapper[4794]: I0216 17:17:46.672048 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9ad8b05-6f22-432a-b192-f1ea8f52b425" containerName="registry-server" Feb 16 17:17:46 crc kubenswrapper[4794]: I0216 17:17:46.673640 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr" Feb 16 17:17:46 crc kubenswrapper[4794]: I0216 17:17:46.675601 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-g8xrz" Feb 16 17:17:46 crc kubenswrapper[4794]: I0216 17:17:46.679602 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr"] Feb 16 17:17:46 crc kubenswrapper[4794]: I0216 17:17:46.809911 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2ff4cec4-468f-41bf-a84a-4cdbc3e236fb-bundle\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr\" (UID: \"2ff4cec4-468f-41bf-a84a-4cdbc3e236fb\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr" Feb 16 17:17:46 crc kubenswrapper[4794]: I0216 17:17:46.809994 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfnj7\" (UniqueName: \"kubernetes.io/projected/2ff4cec4-468f-41bf-a84a-4cdbc3e236fb-kube-api-access-hfnj7\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr\" (UID: \"2ff4cec4-468f-41bf-a84a-4cdbc3e236fb\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr" Feb 16 17:17:46 crc kubenswrapper[4794]: I0216 17:17:46.810042 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2ff4cec4-468f-41bf-a84a-4cdbc3e236fb-util\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr\" (UID: \"2ff4cec4-468f-41bf-a84a-4cdbc3e236fb\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr" Feb 16 17:17:46 crc kubenswrapper[4794]: I0216 
17:17:46.911741 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfnj7\" (UniqueName: \"kubernetes.io/projected/2ff4cec4-468f-41bf-a84a-4cdbc3e236fb-kube-api-access-hfnj7\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr\" (UID: \"2ff4cec4-468f-41bf-a84a-4cdbc3e236fb\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr" Feb 16 17:17:46 crc kubenswrapper[4794]: I0216 17:17:46.911904 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2ff4cec4-468f-41bf-a84a-4cdbc3e236fb-util\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr\" (UID: \"2ff4cec4-468f-41bf-a84a-4cdbc3e236fb\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr" Feb 16 17:17:46 crc kubenswrapper[4794]: I0216 17:17:46.912084 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2ff4cec4-468f-41bf-a84a-4cdbc3e236fb-bundle\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr\" (UID: \"2ff4cec4-468f-41bf-a84a-4cdbc3e236fb\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr" Feb 16 17:17:46 crc kubenswrapper[4794]: I0216 17:17:46.912770 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2ff4cec4-468f-41bf-a84a-4cdbc3e236fb-util\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr\" (UID: \"2ff4cec4-468f-41bf-a84a-4cdbc3e236fb\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr" Feb 16 17:17:46 crc kubenswrapper[4794]: I0216 17:17:46.913115 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/2ff4cec4-468f-41bf-a84a-4cdbc3e236fb-bundle\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr\" (UID: \"2ff4cec4-468f-41bf-a84a-4cdbc3e236fb\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr" Feb 16 17:17:46 crc kubenswrapper[4794]: I0216 17:17:46.930590 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfnj7\" (UniqueName: \"kubernetes.io/projected/2ff4cec4-468f-41bf-a84a-4cdbc3e236fb-kube-api-access-hfnj7\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr\" (UID: \"2ff4cec4-468f-41bf-a84a-4cdbc3e236fb\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr" Feb 16 17:17:47 crc kubenswrapper[4794]: I0216 17:17:47.002523 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr" Feb 16 17:17:47 crc kubenswrapper[4794]: I0216 17:17:47.450583 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr"] Feb 16 17:17:47 crc kubenswrapper[4794]: I0216 17:17:47.815774 4794 generic.go:334] "Generic (PLEG): container finished" podID="2ff4cec4-468f-41bf-a84a-4cdbc3e236fb" containerID="49ab171e60532b1ed006148bb3dbed5926d5699a0edf88849063048d38e4a26b" exitCode=0 Feb 16 17:17:47 crc kubenswrapper[4794]: I0216 17:17:47.815975 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr" event={"ID":"2ff4cec4-468f-41bf-a84a-4cdbc3e236fb","Type":"ContainerDied","Data":"49ab171e60532b1ed006148bb3dbed5926d5699a0edf88849063048d38e4a26b"} Feb 16 17:17:47 crc kubenswrapper[4794]: I0216 17:17:47.817247 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr" event={"ID":"2ff4cec4-468f-41bf-a84a-4cdbc3e236fb","Type":"ContainerStarted","Data":"32a32b435136d2fcb6ff7e0e20371ce024df28e345a4b74f01b2474b3da9a5ba"} Feb 16 17:17:48 crc kubenswrapper[4794]: I0216 17:17:48.826719 4794 generic.go:334] "Generic (PLEG): container finished" podID="2ff4cec4-468f-41bf-a84a-4cdbc3e236fb" containerID="f0fa72b6e64172a5fb42a4c26129bf10388c7ab57eb318129bf07049ef80be9c" exitCode=0 Feb 16 17:17:48 crc kubenswrapper[4794]: I0216 17:17:48.826841 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr" event={"ID":"2ff4cec4-468f-41bf-a84a-4cdbc3e236fb","Type":"ContainerDied","Data":"f0fa72b6e64172a5fb42a4c26129bf10388c7ab57eb318129bf07049ef80be9c"} Feb 16 17:17:49 crc kubenswrapper[4794]: I0216 17:17:49.836114 4794 generic.go:334] "Generic (PLEG): container finished" podID="2ff4cec4-468f-41bf-a84a-4cdbc3e236fb" containerID="1ccd4d9422cefc2e57c82483fb0e94e65c7d80ded2dce6eeeb9c0bc351095a63" exitCode=0 Feb 16 17:17:49 crc kubenswrapper[4794]: I0216 17:17:49.836325 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr" event={"ID":"2ff4cec4-468f-41bf-a84a-4cdbc3e236fb","Type":"ContainerDied","Data":"1ccd4d9422cefc2e57c82483fb0e94e65c7d80ded2dce6eeeb9c0bc351095a63"} Feb 16 17:17:50 crc kubenswrapper[4794]: I0216 17:17:50.140919 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:17:50 crc kubenswrapper[4794]: I0216 17:17:50.140987 4794 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:17:51 crc kubenswrapper[4794]: I0216 17:17:51.254785 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr" Feb 16 17:17:51 crc kubenswrapper[4794]: I0216 17:17:51.390686 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2ff4cec4-468f-41bf-a84a-4cdbc3e236fb-util\") pod \"2ff4cec4-468f-41bf-a84a-4cdbc3e236fb\" (UID: \"2ff4cec4-468f-41bf-a84a-4cdbc3e236fb\") " Feb 16 17:17:51 crc kubenswrapper[4794]: I0216 17:17:51.391028 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfnj7\" (UniqueName: \"kubernetes.io/projected/2ff4cec4-468f-41bf-a84a-4cdbc3e236fb-kube-api-access-hfnj7\") pod \"2ff4cec4-468f-41bf-a84a-4cdbc3e236fb\" (UID: \"2ff4cec4-468f-41bf-a84a-4cdbc3e236fb\") " Feb 16 17:17:51 crc kubenswrapper[4794]: I0216 17:17:51.391072 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2ff4cec4-468f-41bf-a84a-4cdbc3e236fb-bundle\") pod \"2ff4cec4-468f-41bf-a84a-4cdbc3e236fb\" (UID: \"2ff4cec4-468f-41bf-a84a-4cdbc3e236fb\") " Feb 16 17:17:51 crc kubenswrapper[4794]: I0216 17:17:51.392083 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ff4cec4-468f-41bf-a84a-4cdbc3e236fb-bundle" (OuterVolumeSpecName: "bundle") pod "2ff4cec4-468f-41bf-a84a-4cdbc3e236fb" (UID: "2ff4cec4-468f-41bf-a84a-4cdbc3e236fb"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:17:51 crc kubenswrapper[4794]: I0216 17:17:51.402751 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ff4cec4-468f-41bf-a84a-4cdbc3e236fb-kube-api-access-hfnj7" (OuterVolumeSpecName: "kube-api-access-hfnj7") pod "2ff4cec4-468f-41bf-a84a-4cdbc3e236fb" (UID: "2ff4cec4-468f-41bf-a84a-4cdbc3e236fb"). InnerVolumeSpecName "kube-api-access-hfnj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:17:51 crc kubenswrapper[4794]: I0216 17:17:51.412317 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ff4cec4-468f-41bf-a84a-4cdbc3e236fb-util" (OuterVolumeSpecName: "util") pod "2ff4cec4-468f-41bf-a84a-4cdbc3e236fb" (UID: "2ff4cec4-468f-41bf-a84a-4cdbc3e236fb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:17:51 crc kubenswrapper[4794]: I0216 17:17:51.493020 4794 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2ff4cec4-468f-41bf-a84a-4cdbc3e236fb-util\") on node \"crc\" DevicePath \"\"" Feb 16 17:17:51 crc kubenswrapper[4794]: I0216 17:17:51.493059 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfnj7\" (UniqueName: \"kubernetes.io/projected/2ff4cec4-468f-41bf-a84a-4cdbc3e236fb-kube-api-access-hfnj7\") on node \"crc\" DevicePath \"\"" Feb 16 17:17:51 crc kubenswrapper[4794]: I0216 17:17:51.493070 4794 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2ff4cec4-468f-41bf-a84a-4cdbc3e236fb-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:17:51 crc kubenswrapper[4794]: I0216 17:17:51.855517 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr" 
event={"ID":"2ff4cec4-468f-41bf-a84a-4cdbc3e236fb","Type":"ContainerDied","Data":"32a32b435136d2fcb6ff7e0e20371ce024df28e345a4b74f01b2474b3da9a5ba"} Feb 16 17:17:51 crc kubenswrapper[4794]: I0216 17:17:51.855552 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr" Feb 16 17:17:51 crc kubenswrapper[4794]: I0216 17:17:51.855567 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32a32b435136d2fcb6ff7e0e20371ce024df28e345a4b74f01b2474b3da9a5ba" Feb 16 17:17:58 crc kubenswrapper[4794]: I0216 17:17:58.838640 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6f655b9d6d-cn7sg"] Feb 16 17:17:58 crc kubenswrapper[4794]: E0216 17:17:58.839345 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff4cec4-468f-41bf-a84a-4cdbc3e236fb" containerName="util" Feb 16 17:17:58 crc kubenswrapper[4794]: I0216 17:17:58.839361 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff4cec4-468f-41bf-a84a-4cdbc3e236fb" containerName="util" Feb 16 17:17:58 crc kubenswrapper[4794]: E0216 17:17:58.839387 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff4cec4-468f-41bf-a84a-4cdbc3e236fb" containerName="extract" Feb 16 17:17:58 crc kubenswrapper[4794]: I0216 17:17:58.839396 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff4cec4-468f-41bf-a84a-4cdbc3e236fb" containerName="extract" Feb 16 17:17:58 crc kubenswrapper[4794]: E0216 17:17:58.839421 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff4cec4-468f-41bf-a84a-4cdbc3e236fb" containerName="pull" Feb 16 17:17:58 crc kubenswrapper[4794]: I0216 17:17:58.839429 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff4cec4-468f-41bf-a84a-4cdbc3e236fb" containerName="pull" Feb 16 17:17:58 crc kubenswrapper[4794]: I0216 17:17:58.839634 4794 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2ff4cec4-468f-41bf-a84a-4cdbc3e236fb" containerName="extract" Feb 16 17:17:58 crc kubenswrapper[4794]: I0216 17:17:58.840259 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6f655b9d6d-cn7sg" Feb 16 17:17:58 crc kubenswrapper[4794]: I0216 17:17:58.843586 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-5j5mm" Feb 16 17:17:58 crc kubenswrapper[4794]: I0216 17:17:58.871654 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6f655b9d6d-cn7sg"] Feb 16 17:17:58 crc kubenswrapper[4794]: I0216 17:17:58.922557 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97n46\" (UniqueName: \"kubernetes.io/projected/873461df-875e-4238-89df-41d618d290bc-kube-api-access-97n46\") pod \"openstack-operator-controller-init-6f655b9d6d-cn7sg\" (UID: \"873461df-875e-4238-89df-41d618d290bc\") " pod="openstack-operators/openstack-operator-controller-init-6f655b9d6d-cn7sg" Feb 16 17:17:59 crc kubenswrapper[4794]: I0216 17:17:59.024059 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97n46\" (UniqueName: \"kubernetes.io/projected/873461df-875e-4238-89df-41d618d290bc-kube-api-access-97n46\") pod \"openstack-operator-controller-init-6f655b9d6d-cn7sg\" (UID: \"873461df-875e-4238-89df-41d618d290bc\") " pod="openstack-operators/openstack-operator-controller-init-6f655b9d6d-cn7sg" Feb 16 17:17:59 crc kubenswrapper[4794]: I0216 17:17:59.048066 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97n46\" (UniqueName: \"kubernetes.io/projected/873461df-875e-4238-89df-41d618d290bc-kube-api-access-97n46\") pod 
\"openstack-operator-controller-init-6f655b9d6d-cn7sg\" (UID: \"873461df-875e-4238-89df-41d618d290bc\") " pod="openstack-operators/openstack-operator-controller-init-6f655b9d6d-cn7sg" Feb 16 17:17:59 crc kubenswrapper[4794]: I0216 17:17:59.160458 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6f655b9d6d-cn7sg" Feb 16 17:17:59 crc kubenswrapper[4794]: I0216 17:17:59.615250 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6f655b9d6d-cn7sg"] Feb 16 17:17:59 crc kubenswrapper[4794]: I0216 17:17:59.931616 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6f655b9d6d-cn7sg" event={"ID":"873461df-875e-4238-89df-41d618d290bc","Type":"ContainerStarted","Data":"abc699610b9362f1e13e458357730232bf8518f6144e2ea1df93239dd0d2c2c8"} Feb 16 17:18:03 crc kubenswrapper[4794]: I0216 17:18:03.965458 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6f655b9d6d-cn7sg" event={"ID":"873461df-875e-4238-89df-41d618d290bc","Type":"ContainerStarted","Data":"d53204cc1bdd59450a8d0607bbe93f45becb26bc2d5ea638c813946aa1405c7e"} Feb 16 17:18:03 crc kubenswrapper[4794]: I0216 17:18:03.966207 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6f655b9d6d-cn7sg" Feb 16 17:18:04 crc kubenswrapper[4794]: I0216 17:18:03.999287 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6f655b9d6d-cn7sg" podStartSLOduration=2.201000552 podStartE2EDuration="5.999270198s" podCreationTimestamp="2026-02-16 17:17:58 +0000 UTC" firstStartedPulling="2026-02-16 17:17:59.621207205 +0000 UTC m=+1105.569301852" lastFinishedPulling="2026-02-16 17:18:03.419476841 +0000 UTC m=+1109.367571498" 
observedRunningTime="2026-02-16 17:18:03.994811141 +0000 UTC m=+1109.942905808" watchObservedRunningTime="2026-02-16 17:18:03.999270198 +0000 UTC m=+1109.947364845" Feb 16 17:18:09 crc kubenswrapper[4794]: I0216 17:18:09.164943 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6f655b9d6d-cn7sg" Feb 16 17:18:20 crc kubenswrapper[4794]: I0216 17:18:20.140759 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:18:20 crc kubenswrapper[4794]: I0216 17:18:20.141203 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:18:20 crc kubenswrapper[4794]: I0216 17:18:20.141243 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 17:18:20 crc kubenswrapper[4794]: I0216 17:18:20.141874 4794 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cce390b6213c7330d230e979677c08327d065b64facb3363518840eb14ee0ef8"} pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:18:20 crc kubenswrapper[4794]: I0216 17:18:20.141916 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" 
podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" containerID="cri-o://cce390b6213c7330d230e979677c08327d065b64facb3363518840eb14ee0ef8" gracePeriod=600 Feb 16 17:18:21 crc kubenswrapper[4794]: I0216 17:18:21.089279 4794 generic.go:334] "Generic (PLEG): container finished" podID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerID="cce390b6213c7330d230e979677c08327d065b64facb3363518840eb14ee0ef8" exitCode=0 Feb 16 17:18:21 crc kubenswrapper[4794]: I0216 17:18:21.089829 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerDied","Data":"cce390b6213c7330d230e979677c08327d065b64facb3363518840eb14ee0ef8"} Feb 16 17:18:21 crc kubenswrapper[4794]: I0216 17:18:21.089857 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"07948fa6ee2afc937a020c1d294030183c36d82f2764d0a7fd3e60ea347005ea"} Feb 16 17:18:21 crc kubenswrapper[4794]: I0216 17:18:21.089875 4794 scope.go:117] "RemoveContainer" containerID="3aa97207ca6eb1342d7e8e60d0b01510075367f6246c193f5626cd5253489630" Feb 16 17:18:28 crc kubenswrapper[4794]: I0216 17:18:28.862837 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-j56v5"] Feb 16 17:18:28 crc kubenswrapper[4794]: I0216 17:18:28.864415 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-j56v5" Feb 16 17:18:28 crc kubenswrapper[4794]: I0216 17:18:28.867109 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-5m2x7" Feb 16 17:18:28 crc kubenswrapper[4794]: I0216 17:18:28.871075 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-f4x45"] Feb 16 17:18:28 crc kubenswrapper[4794]: I0216 17:18:28.871997 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-f4x45" Feb 16 17:18:28 crc kubenswrapper[4794]: I0216 17:18:28.873783 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-j6fdp" Feb 16 17:18:28 crc kubenswrapper[4794]: I0216 17:18:28.887115 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-f4x45"] Feb 16 17:18:28 crc kubenswrapper[4794]: I0216 17:18:28.897647 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-j56v5"] Feb 16 17:18:28 crc kubenswrapper[4794]: I0216 17:18:28.906892 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-cq6lq"] Feb 16 17:18:28 crc kubenswrapper[4794]: I0216 17:18:28.915163 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cq6lq" Feb 16 17:18:28 crc kubenswrapper[4794]: I0216 17:18:28.918003 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-2h2hg" Feb 16 17:18:28 crc kubenswrapper[4794]: I0216 17:18:28.940179 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-w7smz"] Feb 16 17:18:28 crc kubenswrapper[4794]: I0216 17:18:28.941540 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7smz" Feb 16 17:18:28 crc kubenswrapper[4794]: I0216 17:18:28.949014 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-9mj87" Feb 16 17:18:28 crc kubenswrapper[4794]: I0216 17:18:28.958591 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-cq6lq"] Feb 16 17:18:28 crc kubenswrapper[4794]: I0216 17:18:28.972493 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44cck\" (UniqueName: \"kubernetes.io/projected/f7f924f9-9e09-4b23-91f2-7ac446f44405-kube-api-access-44cck\") pod \"barbican-operator-controller-manager-868647ff47-j56v5\" (UID: \"f7f924f9-9e09-4b23-91f2-7ac446f44405\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-j56v5" Feb 16 17:18:28 crc kubenswrapper[4794]: I0216 17:18:28.972862 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5wc8\" (UniqueName: \"kubernetes.io/projected/5d75b8f6-2376-48f7-90eb-de0bec6cf251-kube-api-access-z5wc8\") pod \"designate-operator-controller-manager-6d8bf5c495-cq6lq\" (UID: 
\"5d75b8f6-2376-48f7-90eb-de0bec6cf251\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cq6lq" Feb 16 17:18:28 crc kubenswrapper[4794]: I0216 17:18:28.972956 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txb5s\" (UniqueName: \"kubernetes.io/projected/a78b821e-c246-42b4-9576-603f0889965f-kube-api-access-txb5s\") pod \"cinder-operator-controller-manager-5d946d989d-f4x45\" (UID: \"a78b821e-c246-42b4-9576-603f0889965f\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-f4x45" Feb 16 17:18:28 crc kubenswrapper[4794]: I0216 17:18:28.982370 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-w7smz"] Feb 16 17:18:28 crc kubenswrapper[4794]: I0216 17:18:28.990381 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-t22f4"] Feb 16 17:18:28 crc kubenswrapper[4794]: I0216 17:18:28.991701 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-t22f4" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.007256 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-bz9k6" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.041386 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-t22f4"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.043259 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r5hls"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.049054 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r5hls" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.054272 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-ljq59" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.081014 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p59k6\" (UniqueName: \"kubernetes.io/projected/5616fc58-e868-46a9-bad9-58cb130759de-kube-api-access-p59k6\") pod \"heat-operator-controller-manager-69f49c598c-t22f4\" (UID: \"5616fc58-e868-46a9-bad9-58cb130759de\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-t22f4" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.081063 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbq2b\" (UniqueName: \"kubernetes.io/projected/becddb1f-01f4-4141-a6da-86771dcf2c70-kube-api-access-zbq2b\") pod \"glance-operator-controller-manager-77987464f4-w7smz\" (UID: \"becddb1f-01f4-4141-a6da-86771dcf2c70\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7smz" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.081151 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5wc8\" (UniqueName: \"kubernetes.io/projected/5d75b8f6-2376-48f7-90eb-de0bec6cf251-kube-api-access-z5wc8\") pod \"designate-operator-controller-manager-6d8bf5c495-cq6lq\" (UID: \"5d75b8f6-2376-48f7-90eb-de0bec6cf251\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cq6lq" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.081179 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txb5s\" (UniqueName: \"kubernetes.io/projected/a78b821e-c246-42b4-9576-603f0889965f-kube-api-access-txb5s\") 
pod \"cinder-operator-controller-manager-5d946d989d-f4x45\" (UID: \"a78b821e-c246-42b4-9576-603f0889965f\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-f4x45" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.081203 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44cck\" (UniqueName: \"kubernetes.io/projected/f7f924f9-9e09-4b23-91f2-7ac446f44405-kube-api-access-44cck\") pod \"barbican-operator-controller-manager-868647ff47-j56v5\" (UID: \"f7f924f9-9e09-4b23-91f2-7ac446f44405\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-j56v5" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.090373 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r5hls"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.102380 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-45nh7"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.103647 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-45nh7" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.108342 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-4djph"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.128978 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44cck\" (UniqueName: \"kubernetes.io/projected/f7f924f9-9e09-4b23-91f2-7ac446f44405-kube-api-access-44cck\") pod \"barbican-operator-controller-manager-868647ff47-j56v5\" (UID: \"f7f924f9-9e09-4b23-91f2-7ac446f44405\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-j56v5" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.129592 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5wc8\" (UniqueName: \"kubernetes.io/projected/5d75b8f6-2376-48f7-90eb-de0bec6cf251-kube-api-access-z5wc8\") pod \"designate-operator-controller-manager-6d8bf5c495-cq6lq\" (UID: \"5d75b8f6-2376-48f7-90eb-de0bec6cf251\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cq6lq" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.130006 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.141636 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4djph" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.143273 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-vfqgw" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.149726 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txb5s\" (UniqueName: \"kubernetes.io/projected/a78b821e-c246-42b4-9576-603f0889965f-kube-api-access-txb5s\") pod \"cinder-operator-controller-manager-5d946d989d-f4x45\" (UID: \"a78b821e-c246-42b4-9576-603f0889965f\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-f4x45" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.155744 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-qksdn" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.183237 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3da72c4e-1963-406a-9dff-f0bc43f154bd-cert\") pod \"infra-operator-controller-manager-79d975b745-45nh7\" (UID: \"3da72c4e-1963-406a-9dff-f0bc43f154bd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-45nh7" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.183283 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p59k6\" (UniqueName: \"kubernetes.io/projected/5616fc58-e868-46a9-bad9-58cb130759de-kube-api-access-p59k6\") pod \"heat-operator-controller-manager-69f49c598c-t22f4\" (UID: \"5616fc58-e868-46a9-bad9-58cb130759de\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-t22f4" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.183322 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-zbq2b\" (UniqueName: \"kubernetes.io/projected/becddb1f-01f4-4141-a6da-86771dcf2c70-kube-api-access-zbq2b\") pod \"glance-operator-controller-manager-77987464f4-w7smz\" (UID: \"becddb1f-01f4-4141-a6da-86771dcf2c70\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7smz" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.183358 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkz4p\" (UniqueName: \"kubernetes.io/projected/ef354ee7-16e4-4b4d-98c5-0f08fc370717-kube-api-access-fkz4p\") pod \"keystone-operator-controller-manager-b4d948c87-4djph\" (UID: \"ef354ee7-16e4-4b4d-98c5-0f08fc370717\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4djph" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.183418 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncbbn\" (UniqueName: \"kubernetes.io/projected/2f2fd1c7-b7ec-4807-a859-b1d5efb8c58e-kube-api-access-ncbbn\") pod \"horizon-operator-controller-manager-5b9b8895d5-r5hls\" (UID: \"2f2fd1c7-b7ec-4807-a859-b1d5efb8c58e\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r5hls" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.183439 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7hqs\" (UniqueName: \"kubernetes.io/projected/3da72c4e-1963-406a-9dff-f0bc43f154bd-kube-api-access-d7hqs\") pod \"infra-operator-controller-manager-79d975b745-45nh7\" (UID: \"3da72c4e-1963-406a-9dff-f0bc43f154bd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-45nh7" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.188897 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-j56v5" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.198804 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-f4x45" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.223267 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-5dghp"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.225061 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5dghp" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.230002 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-b7dx9" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.242569 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cq6lq" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.249597 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p59k6\" (UniqueName: \"kubernetes.io/projected/5616fc58-e868-46a9-bad9-58cb130759de-kube-api-access-p59k6\") pod \"heat-operator-controller-manager-69f49c598c-t22f4\" (UID: \"5616fc58-e868-46a9-bad9-58cb130759de\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-t22f4" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.251590 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbq2b\" (UniqueName: \"kubernetes.io/projected/becddb1f-01f4-4141-a6da-86771dcf2c70-kube-api-access-zbq2b\") pod \"glance-operator-controller-manager-77987464f4-w7smz\" (UID: \"becddb1f-01f4-4141-a6da-86771dcf2c70\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7smz" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.257020 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-45nh7"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.271313 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-4djph"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.272875 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7smz" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.280981 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-5dghp"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.285181 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncbbn\" (UniqueName: \"kubernetes.io/projected/2f2fd1c7-b7ec-4807-a859-b1d5efb8c58e-kube-api-access-ncbbn\") pod \"horizon-operator-controller-manager-5b9b8895d5-r5hls\" (UID: \"2f2fd1c7-b7ec-4807-a859-b1d5efb8c58e\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r5hls" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.285212 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7hqs\" (UniqueName: \"kubernetes.io/projected/3da72c4e-1963-406a-9dff-f0bc43f154bd-kube-api-access-d7hqs\") pod \"infra-operator-controller-manager-79d975b745-45nh7\" (UID: \"3da72c4e-1963-406a-9dff-f0bc43f154bd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-45nh7" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.285289 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3da72c4e-1963-406a-9dff-f0bc43f154bd-cert\") pod \"infra-operator-controller-manager-79d975b745-45nh7\" (UID: \"3da72c4e-1963-406a-9dff-f0bc43f154bd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-45nh7" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.285341 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkz4p\" (UniqueName: \"kubernetes.io/projected/ef354ee7-16e4-4b4d-98c5-0f08fc370717-kube-api-access-fkz4p\") pod \"keystone-operator-controller-manager-b4d948c87-4djph\" (UID: 
\"ef354ee7-16e4-4b4d-98c5-0f08fc370717\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4djph" Feb 16 17:18:29 crc kubenswrapper[4794]: E0216 17:18:29.285863 4794 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 17:18:29 crc kubenswrapper[4794]: E0216 17:18:29.285906 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3da72c4e-1963-406a-9dff-f0bc43f154bd-cert podName:3da72c4e-1963-406a-9dff-f0bc43f154bd nodeName:}" failed. No retries permitted until 2026-02-16 17:18:29.785892687 +0000 UTC m=+1135.733987334 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3da72c4e-1963-406a-9dff-f0bc43f154bd-cert") pod "infra-operator-controller-manager-79d975b745-45nh7" (UID: "3da72c4e-1963-406a-9dff-f0bc43f154bd") : secret "infra-operator-webhook-server-cert" not found Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.290237 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-wcdnq"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.291576 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wcdnq" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.301321 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-x99jf"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.302756 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-x99jf" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.325177 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-t22f4" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.330552 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-rd2nz" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.331267 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-knmrd" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.345342 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncbbn\" (UniqueName: \"kubernetes.io/projected/2f2fd1c7-b7ec-4807-a859-b1d5efb8c58e-kube-api-access-ncbbn\") pod \"horizon-operator-controller-manager-5b9b8895d5-r5hls\" (UID: \"2f2fd1c7-b7ec-4807-a859-b1d5efb8c58e\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r5hls" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.354386 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7hqs\" (UniqueName: \"kubernetes.io/projected/3da72c4e-1963-406a-9dff-f0bc43f154bd-kube-api-access-d7hqs\") pod \"infra-operator-controller-manager-79d975b745-45nh7\" (UID: \"3da72c4e-1963-406a-9dff-f0bc43f154bd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-45nh7" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.354482 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-wcdnq"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.356125 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkz4p\" (UniqueName: \"kubernetes.io/projected/ef354ee7-16e4-4b4d-98c5-0f08fc370717-kube-api-access-fkz4p\") pod \"keystone-operator-controller-manager-b4d948c87-4djph\" (UID: \"ef354ee7-16e4-4b4d-98c5-0f08fc370717\") " 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4djph" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.358633 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4djph" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.382037 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r5hls" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.398517 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wcd7\" (UniqueName: \"kubernetes.io/projected/b428664f-1819-45d4-8040-1c0c35e31c5d-kube-api-access-2wcd7\") pod \"mariadb-operator-controller-manager-6994f66f48-wcdnq\" (UID: \"b428664f-1819-45d4-8040-1c0c35e31c5d\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wcdnq" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.398912 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgt7n\" (UniqueName: \"kubernetes.io/projected/eaa0af70-cd40-4e75-9ddf-83a5a2190d83-kube-api-access-dgt7n\") pod \"ironic-operator-controller-manager-554564d7fc-5dghp\" (UID: \"eaa0af70-cd40-4e75-9ddf-83a5a2190d83\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5dghp" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.414662 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-x99jf"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.434431 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-p59dn"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.474452 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-p59dn" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.483348 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-p59dn"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.495114 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-b8rp6" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.501074 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn87t\" (UniqueName: \"kubernetes.io/projected/bba3e236-f18b-4293-b517-897936db8b05-kube-api-access-wn87t\") pod \"manila-operator-controller-manager-54f6768c69-x99jf\" (UID: \"bba3e236-f18b-4293-b517-897936db8b05\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-x99jf" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.501175 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wcd7\" (UniqueName: \"kubernetes.io/projected/b428664f-1819-45d4-8040-1c0c35e31c5d-kube-api-access-2wcd7\") pod \"mariadb-operator-controller-manager-6994f66f48-wcdnq\" (UID: \"b428664f-1819-45d4-8040-1c0c35e31c5d\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wcdnq" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.501270 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgt7n\" (UniqueName: \"kubernetes.io/projected/eaa0af70-cd40-4e75-9ddf-83a5a2190d83-kube-api-access-dgt7n\") pod \"ironic-operator-controller-manager-554564d7fc-5dghp\" (UID: \"eaa0af70-cd40-4e75-9ddf-83a5a2190d83\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5dghp" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.527205 4794 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-q5f7s"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.528542 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5f7s" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.534039 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-dklql" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.552530 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgt7n\" (UniqueName: \"kubernetes.io/projected/eaa0af70-cd40-4e75-9ddf-83a5a2190d83-kube-api-access-dgt7n\") pod \"ironic-operator-controller-manager-554564d7fc-5dghp\" (UID: \"eaa0af70-cd40-4e75-9ddf-83a5a2190d83\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5dghp" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.561937 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wcd7\" (UniqueName: \"kubernetes.io/projected/b428664f-1819-45d4-8040-1c0c35e31c5d-kube-api-access-2wcd7\") pod \"mariadb-operator-controller-manager-6994f66f48-wcdnq\" (UID: \"b428664f-1819-45d4-8040-1c0c35e31c5d\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wcdnq" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.608502 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnkg9\" (UniqueName: \"kubernetes.io/projected/7eaab997-2552-42b3-b638-a92220374d2d-kube-api-access-qnkg9\") pod \"neutron-operator-controller-manager-64ddbf8bb-p59dn\" (UID: \"7eaab997-2552-42b3-b638-a92220374d2d\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-p59dn" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.608536 4794 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wn87t\" (UniqueName: \"kubernetes.io/projected/bba3e236-f18b-4293-b517-897936db8b05-kube-api-access-wn87t\") pod \"manila-operator-controller-manager-54f6768c69-x99jf\" (UID: \"bba3e236-f18b-4293-b517-897936db8b05\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-x99jf" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.638025 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn87t\" (UniqueName: \"kubernetes.io/projected/bba3e236-f18b-4293-b517-897936db8b05-kube-api-access-wn87t\") pod \"manila-operator-controller-manager-54f6768c69-x99jf\" (UID: \"bba3e236-f18b-4293-b517-897936db8b05\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-x99jf" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.638479 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-q5f7s"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.684921 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5dghp" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.699522 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-s79lr"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.700895 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-s79lr" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.715613 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktlfv\" (UniqueName: \"kubernetes.io/projected/44bf8e87-8212-4680-bcdc-bf1ca6d94d35-kube-api-access-ktlfv\") pod \"nova-operator-controller-manager-567668f5cf-q5f7s\" (UID: \"44bf8e87-8212-4680-bcdc-bf1ca6d94d35\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5f7s" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.715728 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnkg9\" (UniqueName: \"kubernetes.io/projected/7eaab997-2552-42b3-b638-a92220374d2d-kube-api-access-qnkg9\") pod \"neutron-operator-controller-manager-64ddbf8bb-p59dn\" (UID: \"7eaab997-2552-42b3-b638-a92220374d2d\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-p59dn" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.717772 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-gd4wk" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.734603 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wcdnq" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.755118 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-s79lr"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.756139 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnkg9\" (UniqueName: \"kubernetes.io/projected/7eaab997-2552-42b3-b638-a92220374d2d-kube-api-access-qnkg9\") pod \"neutron-operator-controller-manager-64ddbf8bb-p59dn\" (UID: \"7eaab997-2552-42b3-b638-a92220374d2d\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-p59dn" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.775735 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-x99jf" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.777666 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.778834 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.790752 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-mz9sq" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.790989 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.820038 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwphs\" (UniqueName: \"kubernetes.io/projected/f7637fa0-4e0c-41e1-a8e7-ba9442495cfc-kube-api-access-wwphs\") pod \"octavia-operator-controller-manager-69f8888797-s79lr\" (UID: \"f7637fa0-4e0c-41e1-a8e7-ba9442495cfc\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-s79lr" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.820163 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3da72c4e-1963-406a-9dff-f0bc43f154bd-cert\") pod \"infra-operator-controller-manager-79d975b745-45nh7\" (UID: \"3da72c4e-1963-406a-9dff-f0bc43f154bd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-45nh7" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.820532 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktlfv\" (UniqueName: \"kubernetes.io/projected/44bf8e87-8212-4680-bcdc-bf1ca6d94d35-kube-api-access-ktlfv\") pod \"nova-operator-controller-manager-567668f5cf-q5f7s\" (UID: \"44bf8e87-8212-4680-bcdc-bf1ca6d94d35\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5f7s" Feb 16 17:18:29 crc kubenswrapper[4794]: E0216 17:18:29.824747 4794 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 17:18:29 crc kubenswrapper[4794]: E0216 17:18:29.824898 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3da72c4e-1963-406a-9dff-f0bc43f154bd-cert podName:3da72c4e-1963-406a-9dff-f0bc43f154bd nodeName:}" failed. No retries permitted until 2026-02-16 17:18:30.82487425 +0000 UTC m=+1136.772968907 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3da72c4e-1963-406a-9dff-f0bc43f154bd-cert") pod "infra-operator-controller-manager-79d975b745-45nh7" (UID: "3da72c4e-1963-406a-9dff-f0bc43f154bd") : secret "infra-operator-webhook-server-cert" not found Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.858367 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktlfv\" (UniqueName: \"kubernetes.io/projected/44bf8e87-8212-4680-bcdc-bf1ca6d94d35-kube-api-access-ktlfv\") pod \"nova-operator-controller-manager-567668f5cf-q5f7s\" (UID: \"44bf8e87-8212-4680-bcdc-bf1ca6d94d35\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5f7s" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.863804 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-tttfw"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.864943 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-tttfw" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.871790 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-px5vw" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.888822 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-p59dn" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.890375 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.903811 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-g5kbr"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.905348 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5kbr" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.912331 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-495ck" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.922296 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b50615c5-2b75-4b07-9f72-4c70baa57bf3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9\" (UID: \"b50615c5-2b75-4b07-9f72-4c70baa57bf3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.922465 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwphs\" (UniqueName: \"kubernetes.io/projected/f7637fa0-4e0c-41e1-a8e7-ba9442495cfc-kube-api-access-wwphs\") pod \"octavia-operator-controller-manager-69f8888797-s79lr\" (UID: \"f7637fa0-4e0c-41e1-a8e7-ba9442495cfc\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-s79lr" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.922607 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvtnr\" (UniqueName: \"kubernetes.io/projected/b50615c5-2b75-4b07-9f72-4c70baa57bf3-kube-api-access-kvtnr\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9\" (UID: \"b50615c5-2b75-4b07-9f72-4c70baa57bf3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.930984 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5f7s" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.932171 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-tttfw"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.974171 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwphs\" (UniqueName: \"kubernetes.io/projected/f7637fa0-4e0c-41e1-a8e7-ba9442495cfc-kube-api-access-wwphs\") pod \"octavia-operator-controller-manager-69f8888797-s79lr\" (UID: \"f7637fa0-4e0c-41e1-a8e7-ba9442495cfc\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-s79lr" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.974512 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-g5kbr"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.985099 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-b5tg6"] Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.986093 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b5tg6" Feb 16 17:18:29 crc kubenswrapper[4794]: I0216 17:18:29.991640 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-mrv9f" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.000339 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5884f785c-9wnws"] Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.003888 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-9wnws" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.006856 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-wz5c2" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.010757 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-b5tg6"] Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.023712 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b50615c5-2b75-4b07-9f72-4c70baa57bf3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9\" (UID: \"b50615c5-2b75-4b07-9f72-4c70baa57bf3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.023834 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w44cd\" (UniqueName: \"kubernetes.io/projected/c566e561-8069-4311-a79f-71130f9b54d7-kube-api-access-w44cd\") pod \"placement-operator-controller-manager-8497b45c89-g5kbr\" (UID: \"c566e561-8069-4311-a79f-71130f9b54d7\") " 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5kbr" Feb 16 17:18:30 crc kubenswrapper[4794]: E0216 17:18:30.023880 4794 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.023929 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qpkb\" (UniqueName: \"kubernetes.io/projected/f4ca9db4-7b81-4c54-b6df-f5c4a8475a15-kube-api-access-8qpkb\") pod \"ovn-operator-controller-manager-d44cf6b75-tttfw\" (UID: \"f4ca9db4-7b81-4c54-b6df-f5c4a8475a15\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-tttfw" Feb 16 17:18:30 crc kubenswrapper[4794]: E0216 17:18:30.023977 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b50615c5-2b75-4b07-9f72-4c70baa57bf3-cert podName:b50615c5-2b75-4b07-9f72-4c70baa57bf3 nodeName:}" failed. No retries permitted until 2026-02-16 17:18:30.523955062 +0000 UTC m=+1136.472049709 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b50615c5-2b75-4b07-9f72-4c70baa57bf3-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9" (UID: "b50615c5-2b75-4b07-9f72-4c70baa57bf3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.025529 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvtnr\" (UniqueName: \"kubernetes.io/projected/b50615c5-2b75-4b07-9f72-4c70baa57bf3-kube-api-access-kvtnr\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9\" (UID: \"b50615c5-2b75-4b07-9f72-4c70baa57bf3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.028072 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-946dc"] Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.039670 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-946dc" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.042293 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-fcr5p" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.052337 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvtnr\" (UniqueName: \"kubernetes.io/projected/b50615c5-2b75-4b07-9f72-4c70baa57bf3-kube-api-access-kvtnr\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9\" (UID: \"b50615c5-2b75-4b07-9f72-4c70baa57bf3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.054548 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5884f785c-9wnws"] Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.093608 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-s79lr" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.098151 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-946dc"] Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.123947 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-qnr9g"] Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.125904 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qnr9g" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.128087 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxv79\" (UniqueName: \"kubernetes.io/projected/4d912db4-c2c9-4103-ba2c-26f1dc0cc4a6-kube-api-access-jxv79\") pod \"swift-operator-controller-manager-68f46476f-b5tg6\" (UID: \"4d912db4-c2c9-4103-ba2c-26f1dc0cc4a6\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-b5tg6" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.128152 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w44cd\" (UniqueName: \"kubernetes.io/projected/c566e561-8069-4311-a79f-71130f9b54d7-kube-api-access-w44cd\") pod \"placement-operator-controller-manager-8497b45c89-g5kbr\" (UID: \"c566e561-8069-4311-a79f-71130f9b54d7\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5kbr" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.128237 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qpkb\" (UniqueName: \"kubernetes.io/projected/f4ca9db4-7b81-4c54-b6df-f5c4a8475a15-kube-api-access-8qpkb\") pod \"ovn-operator-controller-manager-d44cf6b75-tttfw\" (UID: \"f4ca9db4-7b81-4c54-b6df-f5c4a8475a15\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-tttfw" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.128286 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlplh\" (UniqueName: \"kubernetes.io/projected/116e8deb-7236-4751-95ee-9b839f228f55-kube-api-access-tlplh\") pod \"test-operator-controller-manager-7866795846-946dc\" (UID: \"116e8deb-7236-4751-95ee-9b839f228f55\") " pod="openstack-operators/test-operator-controller-manager-7866795846-946dc" Feb 16 17:18:30 crc 
kubenswrapper[4794]: I0216 17:18:30.128488 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-6cxz2" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.130688 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xmqd\" (UniqueName: \"kubernetes.io/projected/ba76f31a-473e-48b7-873a-a2251f664d4b-kube-api-access-8xmqd\") pod \"telemetry-operator-controller-manager-5884f785c-9wnws\" (UID: \"ba76f31a-473e-48b7-873a-a2251f664d4b\") " pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-9wnws" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.134242 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-qnr9g"] Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.147749 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w44cd\" (UniqueName: \"kubernetes.io/projected/c566e561-8069-4311-a79f-71130f9b54d7-kube-api-access-w44cd\") pod \"placement-operator-controller-manager-8497b45c89-g5kbr\" (UID: \"c566e561-8069-4311-a79f-71130f9b54d7\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5kbr" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.154487 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qpkb\" (UniqueName: \"kubernetes.io/projected/f4ca9db4-7b81-4c54-b6df-f5c4a8475a15-kube-api-access-8qpkb\") pod \"ovn-operator-controller-manager-d44cf6b75-tttfw\" (UID: \"f4ca9db4-7b81-4c54-b6df-f5c4a8475a15\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-tttfw" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.179984 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2"] Feb 16 17:18:30 crc 
kubenswrapper[4794]: I0216 17:18:30.181540 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.189737 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.189755 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-t8425" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.190233 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.194657 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2"] Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.211163 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-tttfw" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.225415 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-f4x45" event={"ID":"a78b821e-c246-42b4-9576-603f0889965f","Type":"ContainerStarted","Data":"003abbd097a86e84d4d2da646698d88debb59ad8e9a33a6fcfcde469c97eaae8"} Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.236712 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxv79\" (UniqueName: \"kubernetes.io/projected/4d912db4-c2c9-4103-ba2c-26f1dc0cc4a6-kube-api-access-jxv79\") pod \"swift-operator-controller-manager-68f46476f-b5tg6\" (UID: \"4d912db4-c2c9-4103-ba2c-26f1dc0cc4a6\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-b5tg6" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.236845 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4ncv\" (UniqueName: \"kubernetes.io/projected/0ae6e41c-d0dc-4437-8b0c-1cd271cdbd6f-kube-api-access-k4ncv\") pod \"watcher-operator-controller-manager-5db88f68c-qnr9g\" (UID: \"0ae6e41c-d0dc-4437-8b0c-1cd271cdbd6f\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qnr9g" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.236884 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlplh\" (UniqueName: \"kubernetes.io/projected/116e8deb-7236-4751-95ee-9b839f228f55-kube-api-access-tlplh\") pod \"test-operator-controller-manager-7866795846-946dc\" (UID: \"116e8deb-7236-4751-95ee-9b839f228f55\") " pod="openstack-operators/test-operator-controller-manager-7866795846-946dc" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.236984 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8xmqd\" (UniqueName: \"kubernetes.io/projected/ba76f31a-473e-48b7-873a-a2251f664d4b-kube-api-access-8xmqd\") pod \"telemetry-operator-controller-manager-5884f785c-9wnws\" (UID: \"ba76f31a-473e-48b7-873a-a2251f664d4b\") " pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-9wnws" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.242778 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5kbr" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.261539 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlplh\" (UniqueName: \"kubernetes.io/projected/116e8deb-7236-4751-95ee-9b839f228f55-kube-api-access-tlplh\") pod \"test-operator-controller-manager-7866795846-946dc\" (UID: \"116e8deb-7236-4751-95ee-9b839f228f55\") " pod="openstack-operators/test-operator-controller-manager-7866795846-946dc" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.262593 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xmqd\" (UniqueName: \"kubernetes.io/projected/ba76f31a-473e-48b7-873a-a2251f664d4b-kube-api-access-8xmqd\") pod \"telemetry-operator-controller-manager-5884f785c-9wnws\" (UID: \"ba76f31a-473e-48b7-873a-a2251f664d4b\") " pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-9wnws" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.271362 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxv79\" (UniqueName: \"kubernetes.io/projected/4d912db4-c2c9-4103-ba2c-26f1dc0cc4a6-kube-api-access-jxv79\") pod \"swift-operator-controller-manager-68f46476f-b5tg6\" (UID: \"4d912db4-c2c9-4103-ba2c-26f1dc0cc4a6\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-b5tg6" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.287410 4794 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64c7v"] Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.288866 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64c7v" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.293187 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-zgppt" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.310190 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64c7v"] Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.341421 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-webhook-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-9nlr2\" (UID: \"3f66b30a-9191-494c-9d74-86e92acdc455\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.341577 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4ncv\" (UniqueName: \"kubernetes.io/projected/0ae6e41c-d0dc-4437-8b0c-1cd271cdbd6f-kube-api-access-k4ncv\") pod \"watcher-operator-controller-manager-5db88f68c-qnr9g\" (UID: \"0ae6e41c-d0dc-4437-8b0c-1cd271cdbd6f\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qnr9g" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.341631 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssfmf\" (UniqueName: \"kubernetes.io/projected/3f66b30a-9191-494c-9d74-86e92acdc455-kube-api-access-ssfmf\") pod \"openstack-operator-controller-manager-6f58b764dd-9nlr2\" 
(UID: \"3f66b30a-9191-494c-9d74-86e92acdc455\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.341661 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-metrics-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-9nlr2\" (UID: \"3f66b30a-9191-494c-9d74-86e92acdc455\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.345318 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b5tg6" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.360102 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4ncv\" (UniqueName: \"kubernetes.io/projected/0ae6e41c-d0dc-4437-8b0c-1cd271cdbd6f-kube-api-access-k4ncv\") pod \"watcher-operator-controller-manager-5db88f68c-qnr9g\" (UID: \"0ae6e41c-d0dc-4437-8b0c-1cd271cdbd6f\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qnr9g" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.361606 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-9wnws" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.373361 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-f4x45"] Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.389494 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qnr9g" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.444695 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssfmf\" (UniqueName: \"kubernetes.io/projected/3f66b30a-9191-494c-9d74-86e92acdc455-kube-api-access-ssfmf\") pod \"openstack-operator-controller-manager-6f58b764dd-9nlr2\" (UID: \"3f66b30a-9191-494c-9d74-86e92acdc455\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.444755 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-metrics-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-9nlr2\" (UID: \"3f66b30a-9191-494c-9d74-86e92acdc455\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.444843 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-webhook-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-9nlr2\" (UID: \"3f66b30a-9191-494c-9d74-86e92acdc455\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2" Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.444998 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7bwq\" (UniqueName: \"kubernetes.io/projected/89a0d9ab-217b-4bc4-ad65-6a66001fe891-kube-api-access-b7bwq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-64c7v\" (UID: \"89a0d9ab-217b-4bc4-ad65-6a66001fe891\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64c7v" Feb 16 17:18:30 crc kubenswrapper[4794]: E0216 17:18:30.445473 
4794 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 17:18:30 crc kubenswrapper[4794]: E0216 17:18:30.445511 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-metrics-certs podName:3f66b30a-9191-494c-9d74-86e92acdc455 nodeName:}" failed. No retries permitted until 2026-02-16 17:18:30.945497244 +0000 UTC m=+1136.893591891 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-metrics-certs") pod "openstack-operator-controller-manager-6f58b764dd-9nlr2" (UID: "3f66b30a-9191-494c-9d74-86e92acdc455") : secret "metrics-server-cert" not found Feb 16 17:18:30 crc kubenswrapper[4794]: E0216 17:18:30.445642 4794 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 17:18:30 crc kubenswrapper[4794]: E0216 17:18:30.445664 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-webhook-certs podName:3f66b30a-9191-494c-9d74-86e92acdc455 nodeName:}" failed. No retries permitted until 2026-02-16 17:18:30.945656258 +0000 UTC m=+1136.893750905 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-webhook-certs") pod "openstack-operator-controller-manager-6f58b764dd-9nlr2" (UID: "3f66b30a-9191-494c-9d74-86e92acdc455") : secret "webhook-server-cert" not found Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.448895 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-946dc"
Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.475226 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssfmf\" (UniqueName: \"kubernetes.io/projected/3f66b30a-9191-494c-9d74-86e92acdc455-kube-api-access-ssfmf\") pod \"openstack-operator-controller-manager-6f58b764dd-9nlr2\" (UID: \"3f66b30a-9191-494c-9d74-86e92acdc455\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2"
Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.547186 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b50615c5-2b75-4b07-9f72-4c70baa57bf3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9\" (UID: \"b50615c5-2b75-4b07-9f72-4c70baa57bf3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9"
Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.547282 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7bwq\" (UniqueName: \"kubernetes.io/projected/89a0d9ab-217b-4bc4-ad65-6a66001fe891-kube-api-access-b7bwq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-64c7v\" (UID: \"89a0d9ab-217b-4bc4-ad65-6a66001fe891\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64c7v"
Feb 16 17:18:30 crc kubenswrapper[4794]: E0216 17:18:30.547647 4794 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 17:18:30 crc kubenswrapper[4794]: E0216 17:18:30.547729 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b50615c5-2b75-4b07-9f72-4c70baa57bf3-cert podName:b50615c5-2b75-4b07-9f72-4c70baa57bf3 nodeName:}" failed. No retries permitted until 2026-02-16 17:18:31.547711531 +0000 UTC m=+1137.495806178 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b50615c5-2b75-4b07-9f72-4c70baa57bf3-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9" (UID: "b50615c5-2b75-4b07-9f72-4c70baa57bf3") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.568538 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7bwq\" (UniqueName: \"kubernetes.io/projected/89a0d9ab-217b-4bc4-ad65-6a66001fe891-kube-api-access-b7bwq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-64c7v\" (UID: \"89a0d9ab-217b-4bc4-ad65-6a66001fe891\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64c7v"
Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.719493 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64c7v"
Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.852189 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3da72c4e-1963-406a-9dff-f0bc43f154bd-cert\") pod \"infra-operator-controller-manager-79d975b745-45nh7\" (UID: \"3da72c4e-1963-406a-9dff-f0bc43f154bd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-45nh7"
Feb 16 17:18:30 crc kubenswrapper[4794]: E0216 17:18:30.852312 4794 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 16 17:18:30 crc kubenswrapper[4794]: E0216 17:18:30.852375 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3da72c4e-1963-406a-9dff-f0bc43f154bd-cert podName:3da72c4e-1963-406a-9dff-f0bc43f154bd nodeName:}" failed. No retries permitted until 2026-02-16 17:18:32.852357605 +0000 UTC m=+1138.800452252 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3da72c4e-1963-406a-9dff-f0bc43f154bd-cert") pod "infra-operator-controller-manager-79d975b745-45nh7" (UID: "3da72c4e-1963-406a-9dff-f0bc43f154bd") : secret "infra-operator-webhook-server-cert" not found
Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.951735 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-j56v5"]
Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.953826 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-metrics-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-9nlr2\" (UID: \"3f66b30a-9191-494c-9d74-86e92acdc455\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2"
Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.953913 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-webhook-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-9nlr2\" (UID: \"3f66b30a-9191-494c-9d74-86e92acdc455\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2"
Feb 16 17:18:30 crc kubenswrapper[4794]: E0216 17:18:30.954100 4794 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 16 17:18:30 crc kubenswrapper[4794]: E0216 17:18:30.954179 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-webhook-certs podName:3f66b30a-9191-494c-9d74-86e92acdc455 nodeName:}" failed. No retries permitted until 2026-02-16 17:18:31.9541594 +0000 UTC m=+1137.902254047 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-webhook-certs") pod "openstack-operator-controller-manager-6f58b764dd-9nlr2" (UID: "3f66b30a-9191-494c-9d74-86e92acdc455") : secret "webhook-server-cert" not found
Feb 16 17:18:30 crc kubenswrapper[4794]: E0216 17:18:30.954591 4794 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 16 17:18:30 crc kubenswrapper[4794]: E0216 17:18:30.954642 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-metrics-certs podName:3f66b30a-9191-494c-9d74-86e92acdc455 nodeName:}" failed. No retries permitted until 2026-02-16 17:18:31.954630404 +0000 UTC m=+1137.902725131 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-metrics-certs") pod "openstack-operator-controller-manager-6f58b764dd-9nlr2" (UID: "3f66b30a-9191-494c-9d74-86e92acdc455") : secret "metrics-server-cert" not found
Feb 16 17:18:30 crc kubenswrapper[4794]: I0216 17:18:30.989996 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-4djph"]
Feb 16 17:18:31 crc kubenswrapper[4794]: W0216 17:18:31.003798 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef354ee7_16e4_4b4d_98c5_0f08fc370717.slice/crio-307a121770e67459f101c37ff8bbe22d01ba0e24e81d8181754f2a93265a5e1f WatchSource:0}: Error finding container 307a121770e67459f101c37ff8bbe22d01ba0e24e81d8181754f2a93265a5e1f: Status 404 returned error can't find the container with id 307a121770e67459f101c37ff8bbe22d01ba0e24e81d8181754f2a93265a5e1f
Feb 16 17:18:31 crc kubenswrapper[4794]: W0216 17:18:31.012940 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d75b8f6_2376_48f7_90eb_de0bec6cf251.slice/crio-c44989f60b3e4b42811611423c70a96b5aa701819fc6f5261815e1bf68e05e4a WatchSource:0}: Error finding container c44989f60b3e4b42811611423c70a96b5aa701819fc6f5261815e1bf68e05e4a: Status 404 returned error can't find the container with id c44989f60b3e4b42811611423c70a96b5aa701819fc6f5261815e1bf68e05e4a
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.013061 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-cq6lq"]
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.026696 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-t22f4"]
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.067416 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-w7smz"]
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.079891 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-5dghp"]
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.087538 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r5hls"]
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.094737 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-x99jf"]
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.248464 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5dghp" event={"ID":"eaa0af70-cd40-4e75-9ddf-83a5a2190d83","Type":"ContainerStarted","Data":"23b6b9491e72c0764a94aebd8baa4960a2e1e5bc5b64580d76ef4b64273ffefe"}
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.250599 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r5hls" event={"ID":"2f2fd1c7-b7ec-4807-a859-b1d5efb8c58e","Type":"ContainerStarted","Data":"f8b32c5c435851cc5ea31c72e16851d3b99ec3b13feaf2e48853d9cd506a78a6"}
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.251245 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-t22f4" event={"ID":"5616fc58-e868-46a9-bad9-58cb130759de","Type":"ContainerStarted","Data":"738241113afa03a83b744024afd5fbfd4faee74fa42a0636ec6ede6e317f8023"}
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.252787 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4djph" event={"ID":"ef354ee7-16e4-4b4d-98c5-0f08fc370717","Type":"ContainerStarted","Data":"307a121770e67459f101c37ff8bbe22d01ba0e24e81d8181754f2a93265a5e1f"}
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.260325 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cq6lq" event={"ID":"5d75b8f6-2376-48f7-90eb-de0bec6cf251","Type":"ContainerStarted","Data":"c44989f60b3e4b42811611423c70a96b5aa701819fc6f5261815e1bf68e05e4a"}
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.261428 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7smz" event={"ID":"becddb1f-01f4-4141-a6da-86771dcf2c70","Type":"ContainerStarted","Data":"76ec769e86180f7dd58e9797e4ad743255566399500756b81cc380995e66b0be"}
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.262240 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-x99jf" event={"ID":"bba3e236-f18b-4293-b517-897936db8b05","Type":"ContainerStarted","Data":"353a426ab26f1875d6d06934e466d422bc188c3f191795ebfc7bc72af2ec9fd7"}
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.263208 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-j56v5" event={"ID":"f7f924f9-9e09-4b23-91f2-7ac446f44405","Type":"ContainerStarted","Data":"33266e97543f2d2615759120d816864cfd12460f7b8e6509fe95a5920f38182e"}
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.585057 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b50615c5-2b75-4b07-9f72-4c70baa57bf3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9\" (UID: \"b50615c5-2b75-4b07-9f72-4c70baa57bf3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9"
Feb 16 17:18:31 crc kubenswrapper[4794]: E0216 17:18:31.585322 4794 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 17:18:31 crc kubenswrapper[4794]: E0216 17:18:31.585372 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b50615c5-2b75-4b07-9f72-4c70baa57bf3-cert podName:b50615c5-2b75-4b07-9f72-4c70baa57bf3 nodeName:}" failed. No retries permitted until 2026-02-16 17:18:33.585358385 +0000 UTC m=+1139.533453032 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b50615c5-2b75-4b07-9f72-4c70baa57bf3-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9" (UID: "b50615c5-2b75-4b07-9f72-4c70baa57bf3") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.761633 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-s79lr"]
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.804644 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-q5f7s"]
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.822286 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-wcdnq"]
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.841601 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-b5tg6"]
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.858815 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-tttfw"]
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.867900 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-p59dn"]
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.874281 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-qnr9g"]
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.882462 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5884f785c-9wnws"]
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.890377 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-946dc"]
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.898495 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-g5kbr"]
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.907321 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64c7v"]
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.991259 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-metrics-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-9nlr2\" (UID: \"3f66b30a-9191-494c-9d74-86e92acdc455\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2"
Feb 16 17:18:31 crc kubenswrapper[4794]: I0216 17:18:31.991354 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-webhook-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-9nlr2\" (UID: \"3f66b30a-9191-494c-9d74-86e92acdc455\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2"
Feb 16 17:18:31 crc kubenswrapper[4794]: E0216 17:18:31.991491 4794 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 16 17:18:31 crc kubenswrapper[4794]: E0216 17:18:31.991539 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-webhook-certs podName:3f66b30a-9191-494c-9d74-86e92acdc455 nodeName:}" failed. No retries permitted until 2026-02-16 17:18:33.991524968 +0000 UTC m=+1139.939619605 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-webhook-certs") pod "openstack-operator-controller-manager-6f58b764dd-9nlr2" (UID: "3f66b30a-9191-494c-9d74-86e92acdc455") : secret "webhook-server-cert" not found
Feb 16 17:18:31 crc kubenswrapper[4794]: E0216 17:18:31.991729 4794 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 16 17:18:31 crc kubenswrapper[4794]: E0216 17:18:31.991805 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-metrics-certs podName:3f66b30a-9191-494c-9d74-86e92acdc455 nodeName:}" failed. No retries permitted until 2026-02-16 17:18:33.991783385 +0000 UTC m=+1139.939878032 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-metrics-certs") pod "openstack-operator-controller-manager-6f58b764dd-9nlr2" (UID: "3f66b30a-9191-494c-9d74-86e92acdc455") : secret "metrics-server-cert" not found
Feb 16 17:18:32 crc kubenswrapper[4794]: I0216 17:18:32.912082 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3da72c4e-1963-406a-9dff-f0bc43f154bd-cert\") pod \"infra-operator-controller-manager-79d975b745-45nh7\" (UID: \"3da72c4e-1963-406a-9dff-f0bc43f154bd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-45nh7"
Feb 16 17:18:32 crc kubenswrapper[4794]: E0216 17:18:32.912431 4794 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 16 17:18:32 crc kubenswrapper[4794]: E0216 17:18:32.912577 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3da72c4e-1963-406a-9dff-f0bc43f154bd-cert podName:3da72c4e-1963-406a-9dff-f0bc43f154bd nodeName:}" failed. No retries permitted until 2026-02-16 17:18:36.912561325 +0000 UTC m=+1142.860655972 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3da72c4e-1963-406a-9dff-f0bc43f154bd-cert") pod "infra-operator-controller-manager-79d975b745-45nh7" (UID: "3da72c4e-1963-406a-9dff-f0bc43f154bd") : secret "infra-operator-webhook-server-cert" not found
Feb 16 17:18:33 crc kubenswrapper[4794]: W0216 17:18:33.198681 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7eaab997_2552_42b3_b638_a92220374d2d.slice/crio-98af2bb06c502ff5e0ca6de534680cf73eeb5bdfa82d9b7ff3a656017da2f0a5 WatchSource:0}: Error finding container 98af2bb06c502ff5e0ca6de534680cf73eeb5bdfa82d9b7ff3a656017da2f0a5: Status 404 returned error can't find the container with id 98af2bb06c502ff5e0ca6de534680cf73eeb5bdfa82d9b7ff3a656017da2f0a5
Feb 16 17:18:33 crc kubenswrapper[4794]: W0216 17:18:33.204918 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d912db4_c2c9_4103_ba2c_26f1dc0cc4a6.slice/crio-2f9b16a4bdfc5bbe886fc319cb5c9af5d5087b08fceba629b8dbd4ee3f62907e WatchSource:0}: Error finding container 2f9b16a4bdfc5bbe886fc319cb5c9af5d5087b08fceba629b8dbd4ee3f62907e: Status 404 returned error can't find the container with id 2f9b16a4bdfc5bbe886fc319cb5c9af5d5087b08fceba629b8dbd4ee3f62907e
Feb 16 17:18:33 crc kubenswrapper[4794]: W0216 17:18:33.222359 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ae6e41c_d0dc_4437_8b0c_1cd271cdbd6f.slice/crio-2c4eb32b18aec421673d0be2d0d9a8568cb2d7a0e5d17fe396f6856b882821a0 WatchSource:0}: Error finding container 2c4eb32b18aec421673d0be2d0d9a8568cb2d7a0e5d17fe396f6856b882821a0: Status 404 returned error can't find the container with id 2c4eb32b18aec421673d0be2d0d9a8568cb2d7a0e5d17fe396f6856b882821a0
Feb 16 17:18:33 crc kubenswrapper[4794]: W0216 17:18:33.228739 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc566e561_8069_4311_a79f_71130f9b54d7.slice/crio-58c7f87737821a4a78788fc5cb22ae477016ff658ab50d1f39349ed47ec3a4e3 WatchSource:0}: Error finding container 58c7f87737821a4a78788fc5cb22ae477016ff658ab50d1f39349ed47ec3a4e3: Status 404 returned error can't find the container with id 58c7f87737821a4a78788fc5cb22ae477016ff658ab50d1f39349ed47ec3a4e3
Feb 16 17:18:33 crc kubenswrapper[4794]: W0216 17:18:33.235984 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4ca9db4_7b81_4c54_b6df_f5c4a8475a15.slice/crio-011a72cb8185e81a89ef6aefe76fc90002f69730829601fc9e3bdd0fcffcec9f WatchSource:0}: Error finding container 011a72cb8185e81a89ef6aefe76fc90002f69730829601fc9e3bdd0fcffcec9f: Status 404 returned error can't find the container with id 011a72cb8185e81a89ef6aefe76fc90002f69730829601fc9e3bdd0fcffcec9f
Feb 16 17:18:33 crc kubenswrapper[4794]: I0216 17:18:33.287415 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b5tg6" event={"ID":"4d912db4-c2c9-4103-ba2c-26f1dc0cc4a6","Type":"ContainerStarted","Data":"2f9b16a4bdfc5bbe886fc319cb5c9af5d5087b08fceba629b8dbd4ee3f62907e"}
Feb 16 17:18:33 crc kubenswrapper[4794]: I0216 17:18:33.288935 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5f7s" event={"ID":"44bf8e87-8212-4680-bcdc-bf1ca6d94d35","Type":"ContainerStarted","Data":"5f7f078b6d626a2b2ab5df8c326102bcfa90599398867d29f1803fcebc9953c8"}
Feb 16 17:18:33 crc kubenswrapper[4794]: I0216 17:18:33.290015 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-s79lr" event={"ID":"f7637fa0-4e0c-41e1-a8e7-ba9442495cfc","Type":"ContainerStarted","Data":"de21f04b56518096780922693fbf4ba6a437a3cc71815f876dabfb0cd2f4fe91"}
Feb 16 17:18:33 crc kubenswrapper[4794]: I0216 17:18:33.290809 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-946dc" event={"ID":"116e8deb-7236-4751-95ee-9b839f228f55","Type":"ContainerStarted","Data":"df5ac6238791b2b2e6c4d462a3379cc979b2f4e4c96c0abb7c4ba3d4173dd8da"}
Feb 16 17:18:33 crc kubenswrapper[4794]: I0216 17:18:33.291820 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5kbr" event={"ID":"c566e561-8069-4311-a79f-71130f9b54d7","Type":"ContainerStarted","Data":"58c7f87737821a4a78788fc5cb22ae477016ff658ab50d1f39349ed47ec3a4e3"}
Feb 16 17:18:33 crc kubenswrapper[4794]: I0216 17:18:33.293503 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-p59dn" event={"ID":"7eaab997-2552-42b3-b638-a92220374d2d","Type":"ContainerStarted","Data":"98af2bb06c502ff5e0ca6de534680cf73eeb5bdfa82d9b7ff3a656017da2f0a5"}
Feb 16 17:18:33 crc kubenswrapper[4794]: I0216 17:18:33.295629 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-tttfw" event={"ID":"f4ca9db4-7b81-4c54-b6df-f5c4a8475a15","Type":"ContainerStarted","Data":"011a72cb8185e81a89ef6aefe76fc90002f69730829601fc9e3bdd0fcffcec9f"}
Feb 16 17:18:33 crc kubenswrapper[4794]: I0216 17:18:33.297819 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qnr9g" event={"ID":"0ae6e41c-d0dc-4437-8b0c-1cd271cdbd6f","Type":"ContainerStarted","Data":"2c4eb32b18aec421673d0be2d0d9a8568cb2d7a0e5d17fe396f6856b882821a0"}
Feb 16 17:18:33 crc kubenswrapper[4794]: I0216 17:18:33.625541 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b50615c5-2b75-4b07-9f72-4c70baa57bf3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9\" (UID: \"b50615c5-2b75-4b07-9f72-4c70baa57bf3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9"
Feb 16 17:18:33 crc kubenswrapper[4794]: E0216 17:18:33.626093 4794 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 17:18:33 crc kubenswrapper[4794]: E0216 17:18:33.626147 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b50615c5-2b75-4b07-9f72-4c70baa57bf3-cert podName:b50615c5-2b75-4b07-9f72-4c70baa57bf3 nodeName:}" failed. No retries permitted until 2026-02-16 17:18:37.626133431 +0000 UTC m=+1143.574228078 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b50615c5-2b75-4b07-9f72-4c70baa57bf3-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9" (UID: "b50615c5-2b75-4b07-9f72-4c70baa57bf3") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 17:18:34 crc kubenswrapper[4794]: I0216 17:18:34.033948 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-metrics-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-9nlr2\" (UID: \"3f66b30a-9191-494c-9d74-86e92acdc455\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2"
Feb 16 17:18:34 crc kubenswrapper[4794]: I0216 17:18:34.034021 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-webhook-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-9nlr2\" (UID: \"3f66b30a-9191-494c-9d74-86e92acdc455\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2"
Feb 16 17:18:34 crc kubenswrapper[4794]: E0216 17:18:34.034164 4794 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 16 17:18:34 crc kubenswrapper[4794]: E0216 17:18:34.034168 4794 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 16 17:18:34 crc kubenswrapper[4794]: E0216 17:18:34.034249 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-webhook-certs podName:3f66b30a-9191-494c-9d74-86e92acdc455 nodeName:}" failed. No retries permitted until 2026-02-16 17:18:38.034233729 +0000 UTC m=+1143.982328376 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-webhook-certs") pod "openstack-operator-controller-manager-6f58b764dd-9nlr2" (UID: "3f66b30a-9191-494c-9d74-86e92acdc455") : secret "webhook-server-cert" not found
Feb 16 17:18:34 crc kubenswrapper[4794]: E0216 17:18:34.034268 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-metrics-certs podName:3f66b30a-9191-494c-9d74-86e92acdc455 nodeName:}" failed. No retries permitted until 2026-02-16 17:18:38.03426186 +0000 UTC m=+1143.982356507 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-metrics-certs") pod "openstack-operator-controller-manager-6f58b764dd-9nlr2" (UID: "3f66b30a-9191-494c-9d74-86e92acdc455") : secret "metrics-server-cert" not found
Feb 16 17:18:37 crc kubenswrapper[4794]: I0216 17:18:37.000036 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3da72c4e-1963-406a-9dff-f0bc43f154bd-cert\") pod \"infra-operator-controller-manager-79d975b745-45nh7\" (UID: \"3da72c4e-1963-406a-9dff-f0bc43f154bd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-45nh7"
Feb 16 17:18:37 crc kubenswrapper[4794]: E0216 17:18:37.000434 4794 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 16 17:18:37 crc kubenswrapper[4794]: E0216 17:18:37.000586 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3da72c4e-1963-406a-9dff-f0bc43f154bd-cert podName:3da72c4e-1963-406a-9dff-f0bc43f154bd nodeName:}" failed. No retries permitted until 2026-02-16 17:18:45.00057067 +0000 UTC m=+1150.948665327 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3da72c4e-1963-406a-9dff-f0bc43f154bd-cert") pod "infra-operator-controller-manager-79d975b745-45nh7" (UID: "3da72c4e-1963-406a-9dff-f0bc43f154bd") : secret "infra-operator-webhook-server-cert" not found
Feb 16 17:18:37 crc kubenswrapper[4794]: W0216 17:18:37.507203 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba76f31a_473e_48b7_873a_a2251f664d4b.slice/crio-0c3846341dd2d88c0c94ac00be08c6235146df59f6703b471b227835b263d8da WatchSource:0}: Error finding container 0c3846341dd2d88c0c94ac00be08c6235146df59f6703b471b227835b263d8da: Status 404 returned error can't find the container with id 0c3846341dd2d88c0c94ac00be08c6235146df59f6703b471b227835b263d8da
Feb 16 17:18:37 crc kubenswrapper[4794]: I0216 17:18:37.714615 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b50615c5-2b75-4b07-9f72-4c70baa57bf3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9\" (UID: \"b50615c5-2b75-4b07-9f72-4c70baa57bf3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9"
Feb 16 17:18:37 crc kubenswrapper[4794]: E0216 17:18:37.714892 4794 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 17:18:37 crc kubenswrapper[4794]: E0216 17:18:37.715211 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b50615c5-2b75-4b07-9f72-4c70baa57bf3-cert podName:b50615c5-2b75-4b07-9f72-4c70baa57bf3 nodeName:}" failed. No retries permitted until 2026-02-16 17:18:45.715171485 +0000 UTC m=+1151.663266132 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b50615c5-2b75-4b07-9f72-4c70baa57bf3-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9" (UID: "b50615c5-2b75-4b07-9f72-4c70baa57bf3") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 17:18:38 crc kubenswrapper[4794]: I0216 17:18:38.122567 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-metrics-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-9nlr2\" (UID: \"3f66b30a-9191-494c-9d74-86e92acdc455\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2"
Feb 16 17:18:38 crc kubenswrapper[4794]: I0216 17:18:38.122655 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-webhook-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-9nlr2\" (UID: \"3f66b30a-9191-494c-9d74-86e92acdc455\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2"
Feb 16 17:18:38 crc kubenswrapper[4794]: E0216 17:18:38.122807 4794 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 16 17:18:38 crc kubenswrapper[4794]: E0216 17:18:38.122870 4794 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 16 17:18:38 crc kubenswrapper[4794]: E0216 17:18:38.122905 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-metrics-certs podName:3f66b30a-9191-494c-9d74-86e92acdc455 nodeName:}" failed. No retries permitted until 2026-02-16 17:18:46.122879671 +0000 UTC m=+1152.070974368 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-metrics-certs") pod "openstack-operator-controller-manager-6f58b764dd-9nlr2" (UID: "3f66b30a-9191-494c-9d74-86e92acdc455") : secret "metrics-server-cert" not found
Feb 16 17:18:38 crc kubenswrapper[4794]: E0216 17:18:38.122949 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-webhook-certs podName:3f66b30a-9191-494c-9d74-86e92acdc455 nodeName:}" failed. No retries permitted until 2026-02-16 17:18:46.122922612 +0000 UTC m=+1152.071017309 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-webhook-certs") pod "openstack-operator-controller-manager-6f58b764dd-9nlr2" (UID: "3f66b30a-9191-494c-9d74-86e92acdc455") : secret "webhook-server-cert" not found
Feb 16 17:18:38 crc kubenswrapper[4794]: I0216 17:18:38.343652 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-9wnws" event={"ID":"ba76f31a-473e-48b7-873a-a2251f664d4b","Type":"ContainerStarted","Data":"0c3846341dd2d88c0c94ac00be08c6235146df59f6703b471b227835b263d8da"}
Feb 16 17:18:38 crc kubenswrapper[4794]: I0216 17:18:38.344668 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wcdnq" event={"ID":"b428664f-1819-45d4-8040-1c0c35e31c5d","Type":"ContainerStarted","Data":"d4a04c0f85571015d241631a4609aebed6a90a827211b8437877735b75faab1f"}
Feb 16 17:18:38 crc kubenswrapper[4794]: I0216 17:18:38.347559 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64c7v" event={"ID":"89a0d9ab-217b-4bc4-ad65-6a66001fe891","Type":"ContainerStarted","Data":"36e7c65784d7598dfbafb0e62a179acec74192f2a0db92bf9cf77cdecd4371b1"}
Feb 16 17:18:45 crc kubenswrapper[4794]: I0216 17:18:45.091071 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3da72c4e-1963-406a-9dff-f0bc43f154bd-cert\") pod \"infra-operator-controller-manager-79d975b745-45nh7\" (UID: \"3da72c4e-1963-406a-9dff-f0bc43f154bd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-45nh7"
Feb 16 17:18:45 crc kubenswrapper[4794]: E0216 17:18:45.091223 4794 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 16 17:18:45 crc kubenswrapper[4794]: E0216 17:18:45.091786 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3da72c4e-1963-406a-9dff-f0bc43f154bd-cert podName:3da72c4e-1963-406a-9dff-f0bc43f154bd nodeName:}" failed. No retries permitted until 2026-02-16 17:19:01.091766628 +0000 UTC m=+1167.039861275 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3da72c4e-1963-406a-9dff-f0bc43f154bd-cert") pod "infra-operator-controller-manager-79d975b745-45nh7" (UID: "3da72c4e-1963-406a-9dff-f0bc43f154bd") : secret "infra-operator-webhook-server-cert" not found
Feb 16 17:18:45 crc kubenswrapper[4794]: I0216 17:18:45.804221 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b50615c5-2b75-4b07-9f72-4c70baa57bf3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9\" (UID: \"b50615c5-2b75-4b07-9f72-4c70baa57bf3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9"
Feb 16 17:18:45 crc kubenswrapper[4794]: I0216 17:18:45.812218 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b50615c5-2b75-4b07-9f72-4c70baa57bf3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9\" (UID: \"b50615c5-2b75-4b07-9f72-4c70baa57bf3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9"
Feb 16 17:18:46 crc kubenswrapper[4794]: I0216 17:18:46.019960 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-mz9sq"
Feb 16 17:18:46 crc kubenswrapper[4794]: I0216 17:18:46.027543 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9"
Feb 16 17:18:46 crc kubenswrapper[4794]: I0216 17:18:46.212548 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-webhook-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-9nlr2\" (UID: \"3f66b30a-9191-494c-9d74-86e92acdc455\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2"
Feb 16 17:18:46 crc kubenswrapper[4794]: I0216 17:18:46.213533 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-metrics-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-9nlr2\" (UID: \"3f66b30a-9191-494c-9d74-86e92acdc455\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2"
Feb 16 17:18:46 crc kubenswrapper[4794]: I0216 17:18:46.217376 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-webhook-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-9nlr2\" (UID: \"3f66b30a-9191-494c-9d74-86e92acdc455\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2"
Feb 16 17:18:46 crc kubenswrapper[4794]: I0216 17:18:46.219767 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3f66b30a-9191-494c-9d74-86e92acdc455-metrics-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-9nlr2\" (UID: \"3f66b30a-9191-494c-9d74-86e92acdc455\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2"
Feb 16 17:18:46 crc kubenswrapper[4794]: I0216 17:18:46.310054 4794 reflector.go:368] Caches populated for
*v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-t8425" Feb 16 17:18:46 crc kubenswrapper[4794]: I0216 17:18:46.318911 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2" Feb 16 17:18:51 crc kubenswrapper[4794]: E0216 17:18:51.527963 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 16 17:18:51 crc kubenswrapper[4794]: E0216 17:18:51.528511 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ncbbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-r5hls_openstack-operators(2f2fd1c7-b7ec-4807-a859-b1d5efb8c58e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:18:51 crc kubenswrapper[4794]: E0216 17:18:51.529701 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r5hls" podUID="2f2fd1c7-b7ec-4807-a859-b1d5efb8c58e" Feb 16 17:18:51 crc kubenswrapper[4794]: E0216 17:18:51.812566 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:dd78726f4471d205f4488eb4a1bee0b8625867539f4e86f0a6d78162ba5a36e9: Get 
\"http://38.102.83.64:5001/v2/openstack-k8s-operators/telemetry-operator/blobs/sha256:dd78726f4471d205f4488eb4a1bee0b8625867539f4e86f0a6d78162ba5a36e9\": context canceled" image="38.102.83.64:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 16 17:18:51 crc kubenswrapper[4794]: E0216 17:18:51.812935 4794 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = reading blob sha256:dd78726f4471d205f4488eb4a1bee0b8625867539f4e86f0a6d78162ba5a36e9: Get \"http://38.102.83.64:5001/v2/openstack-k8s-operators/telemetry-operator/blobs/sha256:dd78726f4471d205f4488eb4a1bee0b8625867539f4e86f0a6d78162ba5a36e9\": context canceled" image="38.102.83.64:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 16 17:18:51 crc kubenswrapper[4794]: E0216 17:18:51.813107 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.64:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8xmqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5884f785c-9wnws_openstack-operators(ba76f31a-473e-48b7-873a-a2251f664d4b): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:dd78726f4471d205f4488eb4a1bee0b8625867539f4e86f0a6d78162ba5a36e9: Get \"http://38.102.83.64:5001/v2/openstack-k8s-operators/telemetry-operator/blobs/sha256:dd78726f4471d205f4488eb4a1bee0b8625867539f4e86f0a6d78162ba5a36e9\": context canceled" logger="UnhandledError" Feb 16 17:18:51 crc kubenswrapper[4794]: E0216 17:18:51.819817 4794 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:dd78726f4471d205f4488eb4a1bee0b8625867539f4e86f0a6d78162ba5a36e9: Get \\\"http://38.102.83.64:5001/v2/openstack-k8s-operators/telemetry-operator/blobs/sha256:dd78726f4471d205f4488eb4a1bee0b8625867539f4e86f0a6d78162ba5a36e9\\\": context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-9wnws" podUID="ba76f31a-473e-48b7-873a-a2251f664d4b" Feb 16 17:18:52 crc kubenswrapper[4794]: E0216 17:18:52.464013 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.64:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-9wnws" podUID="ba76f31a-473e-48b7-873a-a2251f664d4b" Feb 16 17:18:52 crc kubenswrapper[4794]: E0216 17:18:52.464471 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r5hls" podUID="2f2fd1c7-b7ec-4807-a859-b1d5efb8c58e" Feb 16 17:18:53 crc kubenswrapper[4794]: E0216 17:18:53.171348 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd: Get \"https://quay.io/v2/openstack-k8s-operators/rabbitmq-cluster-operator/blobs/sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd\": context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 16 17:18:53 crc kubenswrapper[4794]: E0216 17:18:53.171528 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b7bwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-64c7v_openstack-operators(89a0d9ab-217b-4bc4-ad65-6a66001fe891): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd: Get \"https://quay.io/v2/openstack-k8s-operators/rabbitmq-cluster-operator/blobs/sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd\": context canceled" logger="UnhandledError" Feb 16 17:18:53 crc kubenswrapper[4794]: E0216 17:18:53.172801 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd: Get \\\"https://quay.io/v2/openstack-k8s-operators/rabbitmq-cluster-operator/blobs/sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd\\\": context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64c7v" podUID="89a0d9ab-217b-4bc4-ad65-6a66001fe891" Feb 16 17:18:53 crc kubenswrapper[4794]: E0216 
17:18:53.472564 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64c7v" podUID="89a0d9ab-217b-4bc4-ad65-6a66001fe891" Feb 16 17:18:53 crc kubenswrapper[4794]: E0216 17:18:53.731776 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 16 17:18:53 crc kubenswrapper[4794]: E0216 17:18:53.731982 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qnkg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-p59dn_openstack-operators(7eaab997-2552-42b3-b638-a92220374d2d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:18:53 crc kubenswrapper[4794]: E0216 17:18:53.733373 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-p59dn" podUID="7eaab997-2552-42b3-b638-a92220374d2d" Feb 16 17:18:54 crc kubenswrapper[4794]: E0216 17:18:54.245012 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" Feb 16 17:18:54 crc kubenswrapper[4794]: E0216 17:18:54.245204 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tlplh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-946dc_openstack-operators(116e8deb-7236-4751-95ee-9b839f228f55): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:18:54 crc kubenswrapper[4794]: E0216 17:18:54.246409 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7866795846-946dc" podUID="116e8deb-7236-4751-95ee-9b839f228f55" Feb 16 17:18:54 crc kubenswrapper[4794]: E0216 17:18:54.479516 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-946dc" podUID="116e8deb-7236-4751-95ee-9b839f228f55" Feb 16 17:18:54 crc kubenswrapper[4794]: E0216 17:18:54.479842 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-p59dn" podUID="7eaab997-2552-42b3-b638-a92220374d2d" Feb 16 17:18:54 crc kubenswrapper[4794]: E0216 17:18:54.775974 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" Feb 16 17:18:54 crc kubenswrapper[4794]: E0216 17:18:54.776107 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k4ncv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-qnr9g_openstack-operators(0ae6e41c-d0dc-4437-8b0c-1cd271cdbd6f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:18:54 crc kubenswrapper[4794]: E0216 17:18:54.777282 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qnr9g" podUID="0ae6e41c-d0dc-4437-8b0c-1cd271cdbd6f" Feb 16 17:18:55 crc kubenswrapper[4794]: E0216 17:18:55.486552 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qnr9g" podUID="0ae6e41c-d0dc-4437-8b0c-1cd271cdbd6f" Feb 16 17:18:55 crc kubenswrapper[4794]: E0216 17:18:55.794597 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" Feb 16 17:18:55 crc kubenswrapper[4794]: E0216 17:18:55.794746 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w44cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-g5kbr_openstack-operators(c566e561-8069-4311-a79f-71130f9b54d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:18:55 crc kubenswrapper[4794]: E0216 17:18:55.796368 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5kbr" podUID="c566e561-8069-4311-a79f-71130f9b54d7" Feb 16 17:18:56 crc kubenswrapper[4794]: E0216 17:18:56.495319 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5kbr" podUID="c566e561-8069-4311-a79f-71130f9b54d7" Feb 16 17:18:59 crc kubenswrapper[4794]: E0216 17:18:59.674164 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" Feb 16 17:18:59 crc kubenswrapper[4794]: E0216 17:18:59.674654 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jxv79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-b5tg6_openstack-operators(4d912db4-c2c9-4103-ba2c-26f1dc0cc4a6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:18:59 crc kubenswrapper[4794]: E0216 17:18:59.675834 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-b5tg6" podUID="4d912db4-c2c9-4103-ba2c-26f1dc0cc4a6" Feb 16 17:19:00 crc kubenswrapper[4794]: E0216 17:19:00.535955 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b5tg6" podUID="4d912db4-c2c9-4103-ba2c-26f1dc0cc4a6" Feb 16 17:19:01 crc kubenswrapper[4794]: I0216 17:19:01.121783 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3da72c4e-1963-406a-9dff-f0bc43f154bd-cert\") pod \"infra-operator-controller-manager-79d975b745-45nh7\" (UID: \"3da72c4e-1963-406a-9dff-f0bc43f154bd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-45nh7" Feb 16 17:19:01 crc kubenswrapper[4794]: I0216 17:19:01.128266 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3da72c4e-1963-406a-9dff-f0bc43f154bd-cert\") pod \"infra-operator-controller-manager-79d975b745-45nh7\" (UID: \"3da72c4e-1963-406a-9dff-f0bc43f154bd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-45nh7" Feb 16 17:19:01 crc kubenswrapper[4794]: E0216 17:19:01.290015 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 16 17:19:01 crc kubenswrapper[4794]: E0216 17:19:01.290231 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fkz4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-4djph_openstack-operators(ef354ee7-16e4-4b4d-98c5-0f08fc370717): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:19:01 crc kubenswrapper[4794]: E0216 17:19:01.292254 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4djph" podUID="ef354ee7-16e4-4b4d-98c5-0f08fc370717" Feb 16 17:19:01 crc kubenswrapper[4794]: I0216 17:19:01.406906 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-vfqgw" Feb 16 17:19:01 crc kubenswrapper[4794]: I0216 17:19:01.414656 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-45nh7" Feb 16 17:19:01 crc kubenswrapper[4794]: E0216 17:19:01.534910 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4djph" podUID="ef354ee7-16e4-4b4d-98c5-0f08fc370717" Feb 16 17:19:01 crc kubenswrapper[4794]: E0216 17:19:01.854606 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 16 17:19:01 crc kubenswrapper[4794]: E0216 17:19:01.855198 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wwphs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-s79lr_openstack-operators(f7637fa0-4e0c-41e1-a8e7-ba9442495cfc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:19:01 crc kubenswrapper[4794]: E0216 17:19:01.856413 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/octavia-operator-controller-manager-69f8888797-s79lr" podUID="f7637fa0-4e0c-41e1-a8e7-ba9442495cfc" Feb 16 17:19:02 crc kubenswrapper[4794]: E0216 17:19:02.399805 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 16 17:19:02 crc kubenswrapper[4794]: E0216 17:19:02.400616 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ktlfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-q5f7s_openstack-operators(44bf8e87-8212-4680-bcdc-bf1ca6d94d35): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:19:02 crc kubenswrapper[4794]: E0216 17:19:02.405966 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5f7s" podUID="44bf8e87-8212-4680-bcdc-bf1ca6d94d35" Feb 16 17:19:02 crc kubenswrapper[4794]: E0216 17:19:02.578708 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-s79lr" podUID="f7637fa0-4e0c-41e1-a8e7-ba9442495cfc" Feb 16 17:19:02 crc kubenswrapper[4794]: E0216 17:19:02.580187 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5f7s" podUID="44bf8e87-8212-4680-bcdc-bf1ca6d94d35" Feb 16 17:19:02 crc kubenswrapper[4794]: I0216 17:19:02.980530 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9"] Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.038250 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2"] Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.164925 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-45nh7"] Feb 16 17:19:03 crc kubenswrapper[4794]: W0216 17:19:03.168583 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da72c4e_1963_406a_9dff_f0bc43f154bd.slice/crio-1bd01c1ac779cc52b85680da66a5b36d810b914250aac9c65c3dcb3710cdea29 WatchSource:0}: Error finding container 1bd01c1ac779cc52b85680da66a5b36d810b914250aac9c65c3dcb3710cdea29: Status 404 returned error can't find the container with id 1bd01c1ac779cc52b85680da66a5b36d810b914250aac9c65c3dcb3710cdea29 Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.591946 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2" event={"ID":"3f66b30a-9191-494c-9d74-86e92acdc455","Type":"ContainerStarted","Data":"bd85230489adfc0024cd3e597bb40332d24de4513133644a196f084a60214f70"} Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.592347 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2" event={"ID":"3f66b30a-9191-494c-9d74-86e92acdc455","Type":"ContainerStarted","Data":"0971a8abdac6641fb6076d67270e580c35f094166903c397d5661cde2a7a427f"} Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.592411 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2" Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.610315 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cq6lq" event={"ID":"5d75b8f6-2376-48f7-90eb-de0bec6cf251","Type":"ContainerStarted","Data":"2bfbd2ed7658411a55e6e9138bc6d341a96b7da3bf0b8731ea5c9f79317f5f0c"} Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.611258 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cq6lq" Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.627548 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7smz" event={"ID":"becddb1f-01f4-4141-a6da-86771dcf2c70","Type":"ContainerStarted","Data":"ae58f3f0f1e00de912220c1f57a6e979fe5b15b1b48d3951ab3ddcc35c5d0440"} Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.627685 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7smz" Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.630858 4794 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-tttfw" event={"ID":"f4ca9db4-7b81-4c54-b6df-f5c4a8475a15","Type":"ContainerStarted","Data":"0c16278f4bc9e88ce622e2f242ec29c8059ae29cd37517ce29fdc9edeb9b02da"} Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.631736 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-tttfw" Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.639363 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-45nh7" event={"ID":"3da72c4e-1963-406a-9dff-f0bc43f154bd","Type":"ContainerStarted","Data":"1bd01c1ac779cc52b85680da66a5b36d810b914250aac9c65c3dcb3710cdea29"} Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.640388 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wcdnq" event={"ID":"b428664f-1819-45d4-8040-1c0c35e31c5d","Type":"ContainerStarted","Data":"039c6bfaab2f7c10701773ccc5e6ccbba114d75c8fa570b57afefa996fa36538"} Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.640764 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2" podStartSLOduration=34.64074224 podStartE2EDuration="34.64074224s" podCreationTimestamp="2026-02-16 17:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:19:03.630465117 +0000 UTC m=+1169.578559764" watchObservedRunningTime="2026-02-16 17:19:03.64074224 +0000 UTC m=+1169.588836887" Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.641134 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wcdnq" Feb 
16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.641992 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-f4x45" event={"ID":"a78b821e-c246-42b4-9576-603f0889965f","Type":"ContainerStarted","Data":"b10bccca8a2c12e9e5055cd0a1d27462a034241a99880f6fca4c97bdad7d4388"} Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.642495 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-f4x45" Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.651100 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-x99jf" event={"ID":"bba3e236-f18b-4293-b517-897936db8b05","Type":"ContainerStarted","Data":"7c58a83a9a951040407c3da64cf2d3c8baf530555d5d33c750689a9dac43edc6"} Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.652036 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-x99jf" Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.676175 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-j56v5" event={"ID":"f7f924f9-9e09-4b23-91f2-7ac446f44405","Type":"ContainerStarted","Data":"fa59431200b39ef73fe7e0a430a3a8b848e0d4700c5861c17a2e1d064c183a9a"} Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.676870 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-j56v5" Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.685289 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cq6lq" podStartSLOduration=11.930804801 podStartE2EDuration="35.685261321s" podCreationTimestamp="2026-02-16 
17:18:28 +0000 UTC" firstStartedPulling="2026-02-16 17:18:31.015998515 +0000 UTC m=+1136.964093162" lastFinishedPulling="2026-02-16 17:18:54.770455035 +0000 UTC m=+1160.718549682" observedRunningTime="2026-02-16 17:19:03.669879952 +0000 UTC m=+1169.617974599" watchObservedRunningTime="2026-02-16 17:19:03.685261321 +0000 UTC m=+1169.633355968" Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.702492 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5dghp" event={"ID":"eaa0af70-cd40-4e75-9ddf-83a5a2190d83","Type":"ContainerStarted","Data":"5a4d83194f1073a568a94d6d71205f03b52f3e43460eff39cccbc033ff567c58"} Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.703569 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5dghp" Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.715108 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7smz" podStartSLOduration=11.963507265 podStartE2EDuration="35.715066332s" podCreationTimestamp="2026-02-16 17:18:28 +0000 UTC" firstStartedPulling="2026-02-16 17:18:31.017981682 +0000 UTC m=+1136.966076329" lastFinishedPulling="2026-02-16 17:18:54.769540749 +0000 UTC m=+1160.717635396" observedRunningTime="2026-02-16 17:19:03.695985177 +0000 UTC m=+1169.644079844" watchObservedRunningTime="2026-02-16 17:19:03.715066332 +0000 UTC m=+1169.663160989" Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.718476 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-t22f4" event={"ID":"5616fc58-e868-46a9-bad9-58cb130759de","Type":"ContainerStarted","Data":"834c3e93fb570501c87947d674974054c7bf78b15f06ca9d7886a2b4d65bc862"} Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.718697 4794 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-t22f4" Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.728449 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9" event={"ID":"b50615c5-2b75-4b07-9f72-4c70baa57bf3","Type":"ContainerStarted","Data":"65e6d61041a2416e99e649daae21aaa74df82110cb517d2fc4133d33fbe9f739"} Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.745163 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-f4x45" podStartSLOduration=11.132794767 podStartE2EDuration="35.74513057s" podCreationTimestamp="2026-02-16 17:18:28 +0000 UTC" firstStartedPulling="2026-02-16 17:18:30.158081681 +0000 UTC m=+1136.106176328" lastFinishedPulling="2026-02-16 17:18:54.770417484 +0000 UTC m=+1160.718512131" observedRunningTime="2026-02-16 17:19:03.728485465 +0000 UTC m=+1169.676580112" watchObservedRunningTime="2026-02-16 17:19:03.74513057 +0000 UTC m=+1169.693225367" Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.800031 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-j56v5" podStartSLOduration=12.022717735 podStartE2EDuration="35.800009056s" podCreationTimestamp="2026-02-16 17:18:28 +0000 UTC" firstStartedPulling="2026-02-16 17:18:30.990993922 +0000 UTC m=+1136.939088579" lastFinishedPulling="2026-02-16 17:18:54.768285253 +0000 UTC m=+1160.716379900" observedRunningTime="2026-02-16 17:19:03.767274652 +0000 UTC m=+1169.715369299" watchObservedRunningTime="2026-02-16 17:19:03.800009056 +0000 UTC m=+1169.748103703" Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.814146 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-tttfw" podStartSLOduration=6.78685075 podStartE2EDuration="34.814120199s" podCreationTimestamp="2026-02-16 17:18:29 +0000 UTC" firstStartedPulling="2026-02-16 17:18:33.239129606 +0000 UTC m=+1139.187224253" lastFinishedPulling="2026-02-16 17:19:01.266399055 +0000 UTC m=+1167.214493702" observedRunningTime="2026-02-16 17:19:03.798982387 +0000 UTC m=+1169.747077044" watchObservedRunningTime="2026-02-16 17:19:03.814120199 +0000 UTC m=+1169.762214846" Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.822105 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wcdnq" podStartSLOduration=9.906345252 podStartE2EDuration="34.822087026s" podCreationTimestamp="2026-02-16 17:18:29 +0000 UTC" firstStartedPulling="2026-02-16 17:18:37.524970737 +0000 UTC m=+1143.473065374" lastFinishedPulling="2026-02-16 17:19:02.440712501 +0000 UTC m=+1168.388807148" observedRunningTime="2026-02-16 17:19:03.812479762 +0000 UTC m=+1169.760574409" watchObservedRunningTime="2026-02-16 17:19:03.822087026 +0000 UTC m=+1169.770181673" Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.850928 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-x99jf" podStartSLOduration=11.154362833 podStartE2EDuration="34.850905979s" podCreationTimestamp="2026-02-16 17:18:29 +0000 UTC" firstStartedPulling="2026-02-16 17:18:31.073505887 +0000 UTC m=+1137.021600534" lastFinishedPulling="2026-02-16 17:18:54.770049033 +0000 UTC m=+1160.718143680" observedRunningTime="2026-02-16 17:19:03.84568288 +0000 UTC m=+1169.793777527" watchObservedRunningTime="2026-02-16 17:19:03.850905979 +0000 UTC m=+1169.799000626" Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.926696 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/heat-operator-controller-manager-69f49c598c-t22f4" podStartSLOduration=12.181726682 podStartE2EDuration="35.926678171s" podCreationTimestamp="2026-02-16 17:18:28 +0000 UTC" firstStartedPulling="2026-02-16 17:18:31.025493196 +0000 UTC m=+1136.973587843" lastFinishedPulling="2026-02-16 17:18:54.770444685 +0000 UTC m=+1160.718539332" observedRunningTime="2026-02-16 17:19:03.912123826 +0000 UTC m=+1169.860218473" watchObservedRunningTime="2026-02-16 17:19:03.926678171 +0000 UTC m=+1169.874772818" Feb 16 17:19:03 crc kubenswrapper[4794]: I0216 17:19:03.994660 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5dghp" podStartSLOduration=10.222816796 podStartE2EDuration="34.994630431s" podCreationTimestamp="2026-02-16 17:18:29 +0000 UTC" firstStartedPulling="2026-02-16 17:18:31.016202971 +0000 UTC m=+1136.964297618" lastFinishedPulling="2026-02-16 17:18:55.788016606 +0000 UTC m=+1161.736111253" observedRunningTime="2026-02-16 17:19:03.994531538 +0000 UTC m=+1169.942626185" watchObservedRunningTime="2026-02-16 17:19:03.994630431 +0000 UTC m=+1169.942725078" Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.197880 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-j56v5" Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.201327 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-f4x45" Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.246729 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cq6lq" Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.285178 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7smz" Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.334553 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-t22f4" Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.688704 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5dghp" Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.740987 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wcdnq" Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.791765 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-x99jf" Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.801105 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qnr9g" event={"ID":"0ae6e41c-d0dc-4437-8b0c-1cd271cdbd6f","Type":"ContainerStarted","Data":"827a49fa6bdd58cbdda2866b43ddf27fcf9f2ab917f8a4ed2d1471e58d41fa68"} Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.802001 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qnr9g" Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.803593 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-946dc" event={"ID":"116e8deb-7236-4751-95ee-9b839f228f55","Type":"ContainerStarted","Data":"8dcbab4ea4317957d794ec60d1fc6c12519bfe983291b976fa7999147b7b4068"} Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.804199 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/test-operator-controller-manager-7866795846-946dc" Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.808883 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5kbr" event={"ID":"c566e561-8069-4311-a79f-71130f9b54d7","Type":"ContainerStarted","Data":"6c87aea80526c367a8b8418796b199b2eb5a9ecfd9e44bd241c129985b5c217f"} Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.809714 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5kbr" Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.832637 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r5hls" event={"ID":"2f2fd1c7-b7ec-4807-a859-b1d5efb8c58e","Type":"ContainerStarted","Data":"ed042394b7fe9877f3a9b8cf2151daf2205f602b733b73e8359a3990d0ef44fb"} Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.833531 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r5hls" Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.852200 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-p59dn" event={"ID":"7eaab997-2552-42b3-b638-a92220374d2d","Type":"ContainerStarted","Data":"3392cb769b071c20ed4051d2e32ff658f3b7547543ba4202d8874085a8f893a6"} Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.853100 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-p59dn" Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.882453 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-9wnws" 
event={"ID":"ba76f31a-473e-48b7-873a-a2251f664d4b","Type":"ContainerStarted","Data":"6999c7904358a00dc7415355a3a8c6ada402b30b10bd38f75621ce3d014df6a4"} Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.883348 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-9wnws" Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.890230 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-45nh7" event={"ID":"3da72c4e-1963-406a-9dff-f0bc43f154bd","Type":"ContainerStarted","Data":"0f8bcfcd92ade8850bd1d8f951ab71ff8999a870ab0b8905ca13775099a5b926"} Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.891053 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-45nh7" Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.901105 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5kbr" podStartSLOduration=4.784370757 podStartE2EDuration="40.901089986s" podCreationTimestamp="2026-02-16 17:18:29 +0000 UTC" firstStartedPulling="2026-02-16 17:18:33.231060245 +0000 UTC m=+1139.179154892" lastFinishedPulling="2026-02-16 17:19:09.347779474 +0000 UTC m=+1175.295874121" observedRunningTime="2026-02-16 17:19:09.897855803 +0000 UTC m=+1175.845950450" watchObservedRunningTime="2026-02-16 17:19:09.901089986 +0000 UTC m=+1175.849184633" Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.901913 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-946dc" podStartSLOduration=5.097663189 podStartE2EDuration="40.901908149s" podCreationTimestamp="2026-02-16 17:18:29 +0000 UTC" firstStartedPulling="2026-02-16 17:18:33.248155323 +0000 UTC m=+1139.196249970" 
lastFinishedPulling="2026-02-16 17:19:09.052400293 +0000 UTC m=+1175.000494930" observedRunningTime="2026-02-16 17:19:09.85745202 +0000 UTC m=+1175.805546667" watchObservedRunningTime="2026-02-16 17:19:09.901908149 +0000 UTC m=+1175.850002796" Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.916488 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9" event={"ID":"b50615c5-2b75-4b07-9f72-4c70baa57bf3","Type":"ContainerStarted","Data":"00bde442e0d5970f8e0597f3bc6e60d80e754e90a955af4aab8e5468bf35062b"} Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.918823 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9" Feb 16 17:19:09 crc kubenswrapper[4794]: I0216 17:19:09.960774 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qnr9g" podStartSLOduration=5.13796234 podStartE2EDuration="40.960750719s" podCreationTimestamp="2026-02-16 17:18:29 +0000 UTC" firstStartedPulling="2026-02-16 17:18:33.231431356 +0000 UTC m=+1139.179526003" lastFinishedPulling="2026-02-16 17:19:09.054219745 +0000 UTC m=+1175.002314382" observedRunningTime="2026-02-16 17:19:09.936678051 +0000 UTC m=+1175.884772708" watchObservedRunningTime="2026-02-16 17:19:09.960750719 +0000 UTC m=+1175.908845366" Feb 16 17:19:10 crc kubenswrapper[4794]: I0216 17:19:10.009836 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9" podStartSLOduration=35.374655357 podStartE2EDuration="41.009816389s" podCreationTimestamp="2026-02-16 17:18:29 +0000 UTC" firstStartedPulling="2026-02-16 17:19:03.052173072 +0000 UTC m=+1169.000267719" lastFinishedPulling="2026-02-16 17:19:08.687334084 +0000 UTC m=+1174.635428751" 
observedRunningTime="2026-02-16 17:19:09.987730469 +0000 UTC m=+1175.935825136" watchObservedRunningTime="2026-02-16 17:19:10.009816389 +0000 UTC m=+1175.957911036" Feb 16 17:19:10 crc kubenswrapper[4794]: I0216 17:19:10.037274 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-45nh7" podStartSLOduration=36.526217302 podStartE2EDuration="42.037251522s" podCreationTimestamp="2026-02-16 17:18:28 +0000 UTC" firstStartedPulling="2026-02-16 17:19:03.179235828 +0000 UTC m=+1169.127330475" lastFinishedPulling="2026-02-16 17:19:08.690270048 +0000 UTC m=+1174.638364695" observedRunningTime="2026-02-16 17:19:10.031897339 +0000 UTC m=+1175.979991986" watchObservedRunningTime="2026-02-16 17:19:10.037251522 +0000 UTC m=+1175.985346169" Feb 16 17:19:10 crc kubenswrapper[4794]: I0216 17:19:10.080059 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-9wnws" podStartSLOduration=9.586893175 podStartE2EDuration="41.080033232s" podCreationTimestamp="2026-02-16 17:18:29 +0000 UTC" firstStartedPulling="2026-02-16 17:18:37.525098761 +0000 UTC m=+1143.473193408" lastFinishedPulling="2026-02-16 17:19:09.018238818 +0000 UTC m=+1174.966333465" observedRunningTime="2026-02-16 17:19:10.067350441 +0000 UTC m=+1176.015445078" watchObservedRunningTime="2026-02-16 17:19:10.080033232 +0000 UTC m=+1176.028127879" Feb 16 17:19:10 crc kubenswrapper[4794]: I0216 17:19:10.141359 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r5hls" podStartSLOduration=4.201718736 podStartE2EDuration="42.141335282s" podCreationTimestamp="2026-02-16 17:18:28 +0000 UTC" firstStartedPulling="2026-02-16 17:18:31.079891549 +0000 UTC m=+1137.027986196" lastFinishedPulling="2026-02-16 17:19:09.019508095 +0000 UTC m=+1174.967602742" 
observedRunningTime="2026-02-16 17:19:10.108591137 +0000 UTC m=+1176.056685784" watchObservedRunningTime="2026-02-16 17:19:10.141335282 +0000 UTC m=+1176.089429949" Feb 16 17:19:10 crc kubenswrapper[4794]: I0216 17:19:10.172610 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-p59dn" podStartSLOduration=5.340956061 podStartE2EDuration="41.172585893s" podCreationTimestamp="2026-02-16 17:18:29 +0000 UTC" firstStartedPulling="2026-02-16 17:18:33.204978111 +0000 UTC m=+1139.153072758" lastFinishedPulling="2026-02-16 17:19:09.036607943 +0000 UTC m=+1174.984702590" observedRunningTime="2026-02-16 17:19:10.136836763 +0000 UTC m=+1176.084931410" watchObservedRunningTime="2026-02-16 17:19:10.172585893 +0000 UTC m=+1176.120680541" Feb 16 17:19:10 crc kubenswrapper[4794]: I0216 17:19:10.216969 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-tttfw" Feb 16 17:19:13 crc kubenswrapper[4794]: I0216 17:19:13.958256 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64c7v" event={"ID":"89a0d9ab-217b-4bc4-ad65-6a66001fe891","Type":"ContainerStarted","Data":"de3940b913a80e2f2b7fa8d20946f33e1499684b65bf5ae1b6e0c61b02a5ecac"} Feb 16 17:19:13 crc kubenswrapper[4794]: I0216 17:19:13.982849 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-64c7v" podStartSLOduration=9.087611084 podStartE2EDuration="44.982827551s" podCreationTimestamp="2026-02-16 17:18:29 +0000 UTC" firstStartedPulling="2026-02-16 17:18:37.507398685 +0000 UTC m=+1143.455493332" lastFinishedPulling="2026-02-16 17:19:13.402615152 +0000 UTC m=+1179.350709799" observedRunningTime="2026-02-16 17:19:13.975269935 +0000 UTC m=+1179.923364592" watchObservedRunningTime="2026-02-16 
17:19:13.982827551 +0000 UTC m=+1179.930922198" Feb 16 17:19:14 crc kubenswrapper[4794]: I0216 17:19:14.966735 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b5tg6" event={"ID":"4d912db4-c2c9-4103-ba2c-26f1dc0cc4a6","Type":"ContainerStarted","Data":"1cab454fe9b321a77fe100722957e7be267da09437cf1beb88db15f4025f38d1"} Feb 16 17:19:14 crc kubenswrapper[4794]: I0216 17:19:14.966954 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b5tg6" Feb 16 17:19:14 crc kubenswrapper[4794]: I0216 17:19:14.968258 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5f7s" event={"ID":"44bf8e87-8212-4680-bcdc-bf1ca6d94d35","Type":"ContainerStarted","Data":"628c2d102a8e7041c38edad5a034051c6fe252f344c2e69c907203ca5aa3e7d8"} Feb 16 17:19:14 crc kubenswrapper[4794]: I0216 17:19:14.968459 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5f7s" Feb 16 17:19:14 crc kubenswrapper[4794]: I0216 17:19:14.969502 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4djph" event={"ID":"ef354ee7-16e4-4b4d-98c5-0f08fc370717","Type":"ContainerStarted","Data":"93ddcb2f3d7f98f7b97946932acd6893b5a8dd5f91d969e6666e1f853bde6436"} Feb 16 17:19:14 crc kubenswrapper[4794]: I0216 17:19:14.969884 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4djph" Feb 16 17:19:15 crc kubenswrapper[4794]: I0216 17:19:15.002154 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b5tg6" podStartSLOduration=4.942557583 podStartE2EDuration="46.002136503s" 
podCreationTimestamp="2026-02-16 17:18:29 +0000 UTC" firstStartedPulling="2026-02-16 17:18:33.210112828 +0000 UTC m=+1139.158207475" lastFinishedPulling="2026-02-16 17:19:14.269691728 +0000 UTC m=+1180.217786395" observedRunningTime="2026-02-16 17:19:14.999162788 +0000 UTC m=+1180.947257435" watchObservedRunningTime="2026-02-16 17:19:15.002136503 +0000 UTC m=+1180.950231150" Feb 16 17:19:15 crc kubenswrapper[4794]: I0216 17:19:15.036635 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5f7s" podStartSLOduration=4.965015013 podStartE2EDuration="46.036618337s" podCreationTimestamp="2026-02-16 17:18:29 +0000 UTC" firstStartedPulling="2026-02-16 17:18:33.197913679 +0000 UTC m=+1139.146008326" lastFinishedPulling="2026-02-16 17:19:14.269516983 +0000 UTC m=+1180.217611650" observedRunningTime="2026-02-16 17:19:15.028178676 +0000 UTC m=+1180.976273343" watchObservedRunningTime="2026-02-16 17:19:15.036618337 +0000 UTC m=+1180.984712984" Feb 16 17:19:16 crc kubenswrapper[4794]: I0216 17:19:16.034457 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9" Feb 16 17:19:16 crc kubenswrapper[4794]: I0216 17:19:16.076024 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4djph" podStartSLOduration=3.816996547 podStartE2EDuration="47.076002512s" podCreationTimestamp="2026-02-16 17:18:29 +0000 UTC" firstStartedPulling="2026-02-16 17:18:31.012918237 +0000 UTC m=+1136.961012894" lastFinishedPulling="2026-02-16 17:19:14.271924212 +0000 UTC m=+1180.220018859" observedRunningTime="2026-02-16 17:19:15.06371622 +0000 UTC m=+1181.011810867" watchObservedRunningTime="2026-02-16 17:19:16.076002512 +0000 UTC m=+1182.024097159" Feb 16 17:19:16 crc kubenswrapper[4794]: I0216 17:19:16.324380 4794 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-9nlr2" Feb 16 17:19:19 crc kubenswrapper[4794]: I0216 17:19:19.012545 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-s79lr" event={"ID":"f7637fa0-4e0c-41e1-a8e7-ba9442495cfc","Type":"ContainerStarted","Data":"f6bbdc77335ab84cb2396442c2a7817b7f84f111d75922d7d591c5406043df2e"} Feb 16 17:19:19 crc kubenswrapper[4794]: I0216 17:19:19.015141 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-s79lr" Feb 16 17:19:19 crc kubenswrapper[4794]: I0216 17:19:19.036667 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-s79lr" podStartSLOduration=4.925810824 podStartE2EDuration="50.03664385s" podCreationTimestamp="2026-02-16 17:18:29 +0000 UTC" firstStartedPulling="2026-02-16 17:18:33.198142146 +0000 UTC m=+1139.146236793" lastFinishedPulling="2026-02-16 17:19:18.308975172 +0000 UTC m=+1184.257069819" observedRunningTime="2026-02-16 17:19:19.030149175 +0000 UTC m=+1184.978243862" watchObservedRunningTime="2026-02-16 17:19:19.03664385 +0000 UTC m=+1184.984738507" Feb 16 17:19:19 crc kubenswrapper[4794]: I0216 17:19:19.362991 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4djph" Feb 16 17:19:19 crc kubenswrapper[4794]: I0216 17:19:19.384336 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r5hls" Feb 16 17:19:19 crc kubenswrapper[4794]: I0216 17:19:19.891953 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-p59dn" Feb 16 17:19:19 crc kubenswrapper[4794]: I0216 17:19:19.937138 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-q5f7s" Feb 16 17:19:20 crc kubenswrapper[4794]: I0216 17:19:20.245887 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5kbr" Feb 16 17:19:20 crc kubenswrapper[4794]: I0216 17:19:20.348830 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b5tg6" Feb 16 17:19:20 crc kubenswrapper[4794]: I0216 17:19:20.364776 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-9wnws" Feb 16 17:19:20 crc kubenswrapper[4794]: I0216 17:19:20.393461 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-qnr9g" Feb 16 17:19:20 crc kubenswrapper[4794]: I0216 17:19:20.453024 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-946dc" Feb 16 17:19:21 crc kubenswrapper[4794]: I0216 17:19:21.420173 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-45nh7" Feb 16 17:19:30 crc kubenswrapper[4794]: I0216 17:19:30.098788 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-s79lr" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.572902 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pn2qh"] Feb 16 17:19:47 crc kubenswrapper[4794]: 
I0216 17:19:47.575050 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pn2qh" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.577234 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.577415 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.577586 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.583441 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-rst5h" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.596453 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pn2qh"] Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.660877 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gfk7g"] Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.662531 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gfk7g" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.662709 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gfk7g"] Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.667292 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.678926 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdfjc\" (UniqueName: \"kubernetes.io/projected/771cb777-b410-44fc-bd7a-d5057227dad8-kube-api-access-pdfjc\") pod \"dnsmasq-dns-675f4bcbfc-pn2qh\" (UID: \"771cb777-b410-44fc-bd7a-d5057227dad8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pn2qh" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.679031 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771cb777-b410-44fc-bd7a-d5057227dad8-config\") pod \"dnsmasq-dns-675f4bcbfc-pn2qh\" (UID: \"771cb777-b410-44fc-bd7a-d5057227dad8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pn2qh" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.780825 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdfjc\" (UniqueName: \"kubernetes.io/projected/771cb777-b410-44fc-bd7a-d5057227dad8-kube-api-access-pdfjc\") pod \"dnsmasq-dns-675f4bcbfc-pn2qh\" (UID: \"771cb777-b410-44fc-bd7a-d5057227dad8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pn2qh" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.781191 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771cb777-b410-44fc-bd7a-d5057227dad8-config\") pod \"dnsmasq-dns-675f4bcbfc-pn2qh\" (UID: \"771cb777-b410-44fc-bd7a-d5057227dad8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pn2qh" Feb 16 17:19:47 crc 
kubenswrapper[4794]: I0216 17:19:47.781234 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3a39e2d-9839-4c07-9cec-466372443514-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-gfk7g\" (UID: \"f3a39e2d-9839-4c07-9cec-466372443514\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gfk7g" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.781299 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jngxb\" (UniqueName: \"kubernetes.io/projected/f3a39e2d-9839-4c07-9cec-466372443514-kube-api-access-jngxb\") pod \"dnsmasq-dns-78dd6ddcc-gfk7g\" (UID: \"f3a39e2d-9839-4c07-9cec-466372443514\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gfk7g" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.781345 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3a39e2d-9839-4c07-9cec-466372443514-config\") pod \"dnsmasq-dns-78dd6ddcc-gfk7g\" (UID: \"f3a39e2d-9839-4c07-9cec-466372443514\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gfk7g" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.782270 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771cb777-b410-44fc-bd7a-d5057227dad8-config\") pod \"dnsmasq-dns-675f4bcbfc-pn2qh\" (UID: \"771cb777-b410-44fc-bd7a-d5057227dad8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pn2qh" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.801198 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdfjc\" (UniqueName: \"kubernetes.io/projected/771cb777-b410-44fc-bd7a-d5057227dad8-kube-api-access-pdfjc\") pod \"dnsmasq-dns-675f4bcbfc-pn2qh\" (UID: \"771cb777-b410-44fc-bd7a-d5057227dad8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pn2qh" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 
17:19:47.882853 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jngxb\" (UniqueName: \"kubernetes.io/projected/f3a39e2d-9839-4c07-9cec-466372443514-kube-api-access-jngxb\") pod \"dnsmasq-dns-78dd6ddcc-gfk7g\" (UID: \"f3a39e2d-9839-4c07-9cec-466372443514\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gfk7g" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.882916 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3a39e2d-9839-4c07-9cec-466372443514-config\") pod \"dnsmasq-dns-78dd6ddcc-gfk7g\" (UID: \"f3a39e2d-9839-4c07-9cec-466372443514\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gfk7g" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.883200 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3a39e2d-9839-4c07-9cec-466372443514-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-gfk7g\" (UID: \"f3a39e2d-9839-4c07-9cec-466372443514\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gfk7g" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.884743 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3a39e2d-9839-4c07-9cec-466372443514-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-gfk7g\" (UID: \"f3a39e2d-9839-4c07-9cec-466372443514\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gfk7g" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.884954 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3a39e2d-9839-4c07-9cec-466372443514-config\") pod \"dnsmasq-dns-78dd6ddcc-gfk7g\" (UID: \"f3a39e2d-9839-4c07-9cec-466372443514\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gfk7g" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.913812 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pn2qh" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.914067 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jngxb\" (UniqueName: \"kubernetes.io/projected/f3a39e2d-9839-4c07-9cec-466372443514-kube-api-access-jngxb\") pod \"dnsmasq-dns-78dd6ddcc-gfk7g\" (UID: \"f3a39e2d-9839-4c07-9cec-466372443514\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gfk7g" Feb 16 17:19:47 crc kubenswrapper[4794]: I0216 17:19:47.991533 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gfk7g" Feb 16 17:19:48 crc kubenswrapper[4794]: I0216 17:19:48.418697 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pn2qh"] Feb 16 17:19:48 crc kubenswrapper[4794]: W0216 17:19:48.549935 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3a39e2d_9839_4c07_9cec_466372443514.slice/crio-917722aa0f224f1b5b84883f2832ae33497e21970eabffa30d3bce1c434966d5 WatchSource:0}: Error finding container 917722aa0f224f1b5b84883f2832ae33497e21970eabffa30d3bce1c434966d5: Status 404 returned error can't find the container with id 917722aa0f224f1b5b84883f2832ae33497e21970eabffa30d3bce1c434966d5 Feb 16 17:19:48 crc kubenswrapper[4794]: I0216 17:19:48.551715 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gfk7g"] Feb 16 17:19:49 crc kubenswrapper[4794]: I0216 17:19:49.290825 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pn2qh" event={"ID":"771cb777-b410-44fc-bd7a-d5057227dad8","Type":"ContainerStarted","Data":"e47906b3ce735029130e4e54bcef7e8c591336f81e42ffe56aea768d001662d0"} Feb 16 17:19:49 crc kubenswrapper[4794]: I0216 17:19:49.293516 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-gfk7g" 
event={"ID":"f3a39e2d-9839-4c07-9cec-466372443514","Type":"ContainerStarted","Data":"917722aa0f224f1b5b84883f2832ae33497e21970eabffa30d3bce1c434966d5"} Feb 16 17:19:50 crc kubenswrapper[4794]: I0216 17:19:50.591572 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pn2qh"] Feb 16 17:19:50 crc kubenswrapper[4794]: I0216 17:19:50.623967 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-sz7q7"] Feb 16 17:19:50 crc kubenswrapper[4794]: I0216 17:19:50.626008 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-sz7q7" Feb 16 17:19:50 crc kubenswrapper[4794]: I0216 17:19:50.654331 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-sz7q7"] Feb 16 17:19:50 crc kubenswrapper[4794]: I0216 17:19:50.684216 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gh62\" (UniqueName: \"kubernetes.io/projected/d554e5d8-e390-484d-b414-ace409b51d91-kube-api-access-9gh62\") pod \"dnsmasq-dns-666b6646f7-sz7q7\" (UID: \"d554e5d8-e390-484d-b414-ace409b51d91\") " pod="openstack/dnsmasq-dns-666b6646f7-sz7q7" Feb 16 17:19:50 crc kubenswrapper[4794]: I0216 17:19:50.684514 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d554e5d8-e390-484d-b414-ace409b51d91-dns-svc\") pod \"dnsmasq-dns-666b6646f7-sz7q7\" (UID: \"d554e5d8-e390-484d-b414-ace409b51d91\") " pod="openstack/dnsmasq-dns-666b6646f7-sz7q7" Feb 16 17:19:50 crc kubenswrapper[4794]: I0216 17:19:50.684574 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d554e5d8-e390-484d-b414-ace409b51d91-config\") pod \"dnsmasq-dns-666b6646f7-sz7q7\" (UID: \"d554e5d8-e390-484d-b414-ace409b51d91\") " 
pod="openstack/dnsmasq-dns-666b6646f7-sz7q7" Feb 16 17:19:50 crc kubenswrapper[4794]: I0216 17:19:50.786748 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d554e5d8-e390-484d-b414-ace409b51d91-dns-svc\") pod \"dnsmasq-dns-666b6646f7-sz7q7\" (UID: \"d554e5d8-e390-484d-b414-ace409b51d91\") " pod="openstack/dnsmasq-dns-666b6646f7-sz7q7" Feb 16 17:19:50 crc kubenswrapper[4794]: I0216 17:19:50.786825 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d554e5d8-e390-484d-b414-ace409b51d91-config\") pod \"dnsmasq-dns-666b6646f7-sz7q7\" (UID: \"d554e5d8-e390-484d-b414-ace409b51d91\") " pod="openstack/dnsmasq-dns-666b6646f7-sz7q7" Feb 16 17:19:50 crc kubenswrapper[4794]: I0216 17:19:50.786914 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gh62\" (UniqueName: \"kubernetes.io/projected/d554e5d8-e390-484d-b414-ace409b51d91-kube-api-access-9gh62\") pod \"dnsmasq-dns-666b6646f7-sz7q7\" (UID: \"d554e5d8-e390-484d-b414-ace409b51d91\") " pod="openstack/dnsmasq-dns-666b6646f7-sz7q7" Feb 16 17:19:50 crc kubenswrapper[4794]: I0216 17:19:50.788253 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d554e5d8-e390-484d-b414-ace409b51d91-dns-svc\") pod \"dnsmasq-dns-666b6646f7-sz7q7\" (UID: \"d554e5d8-e390-484d-b414-ace409b51d91\") " pod="openstack/dnsmasq-dns-666b6646f7-sz7q7" Feb 16 17:19:50 crc kubenswrapper[4794]: I0216 17:19:50.789800 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d554e5d8-e390-484d-b414-ace409b51d91-config\") pod \"dnsmasq-dns-666b6646f7-sz7q7\" (UID: \"d554e5d8-e390-484d-b414-ace409b51d91\") " pod="openstack/dnsmasq-dns-666b6646f7-sz7q7" Feb 16 17:19:50 crc kubenswrapper[4794]: I0216 17:19:50.817411 4794 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gh62\" (UniqueName: \"kubernetes.io/projected/d554e5d8-e390-484d-b414-ace409b51d91-kube-api-access-9gh62\") pod \"dnsmasq-dns-666b6646f7-sz7q7\" (UID: \"d554e5d8-e390-484d-b414-ace409b51d91\") " pod="openstack/dnsmasq-dns-666b6646f7-sz7q7"
Feb 16 17:19:50 crc kubenswrapper[4794]: I0216 17:19:50.976029 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gfk7g"]
Feb 16 17:19:50 crc kubenswrapper[4794]: I0216 17:19:50.983827 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-sz7q7"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.007792 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tjvdt"]
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.009506 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-tjvdt"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.025415 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tjvdt"]
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.094631 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw9km\" (UniqueName: \"kubernetes.io/projected/ab201dca-05ea-4f61-a71f-be55c9587777-kube-api-access-fw9km\") pod \"dnsmasq-dns-57d769cc4f-tjvdt\" (UID: \"ab201dca-05ea-4f61-a71f-be55c9587777\") " pod="openstack/dnsmasq-dns-57d769cc4f-tjvdt"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.096295 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab201dca-05ea-4f61-a71f-be55c9587777-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-tjvdt\" (UID: \"ab201dca-05ea-4f61-a71f-be55c9587777\") " pod="openstack/dnsmasq-dns-57d769cc4f-tjvdt"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.096446 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab201dca-05ea-4f61-a71f-be55c9587777-config\") pod \"dnsmasq-dns-57d769cc4f-tjvdt\" (UID: \"ab201dca-05ea-4f61-a71f-be55c9587777\") " pod="openstack/dnsmasq-dns-57d769cc4f-tjvdt"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.199028 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw9km\" (UniqueName: \"kubernetes.io/projected/ab201dca-05ea-4f61-a71f-be55c9587777-kube-api-access-fw9km\") pod \"dnsmasq-dns-57d769cc4f-tjvdt\" (UID: \"ab201dca-05ea-4f61-a71f-be55c9587777\") " pod="openstack/dnsmasq-dns-57d769cc4f-tjvdt"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.199706 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab201dca-05ea-4f61-a71f-be55c9587777-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-tjvdt\" (UID: \"ab201dca-05ea-4f61-a71f-be55c9587777\") " pod="openstack/dnsmasq-dns-57d769cc4f-tjvdt"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.199757 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab201dca-05ea-4f61-a71f-be55c9587777-config\") pod \"dnsmasq-dns-57d769cc4f-tjvdt\" (UID: \"ab201dca-05ea-4f61-a71f-be55c9587777\") " pod="openstack/dnsmasq-dns-57d769cc4f-tjvdt"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.202637 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab201dca-05ea-4f61-a71f-be55c9587777-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-tjvdt\" (UID: \"ab201dca-05ea-4f61-a71f-be55c9587777\") " pod="openstack/dnsmasq-dns-57d769cc4f-tjvdt"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.203082 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab201dca-05ea-4f61-a71f-be55c9587777-config\") pod \"dnsmasq-dns-57d769cc4f-tjvdt\" (UID: \"ab201dca-05ea-4f61-a71f-be55c9587777\") " pod="openstack/dnsmasq-dns-57d769cc4f-tjvdt"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.223542 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw9km\" (UniqueName: \"kubernetes.io/projected/ab201dca-05ea-4f61-a71f-be55c9587777-kube-api-access-fw9km\") pod \"dnsmasq-dns-57d769cc4f-tjvdt\" (UID: \"ab201dca-05ea-4f61-a71f-be55c9587777\") " pod="openstack/dnsmasq-dns-57d769cc4f-tjvdt"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.399517 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-tjvdt"
Feb 16 17:19:51 crc kubenswrapper[4794]: W0216 17:19:51.570098 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd554e5d8_e390_484d_b414_ace409b51d91.slice/crio-9071a90f7fa541d5a4bd07f7d4d727fba8e6c8f5ddf8370c1774cc7490a72fa6 WatchSource:0}: Error finding container 9071a90f7fa541d5a4bd07f7d4d727fba8e6c8f5ddf8370c1774cc7490a72fa6: Status 404 returned error can't find the container with id 9071a90f7fa541d5a4bd07f7d4d727fba8e6c8f5ddf8370c1774cc7490a72fa6
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.572474 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-sz7q7"]
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.772115 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.773692 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.778227 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-2sxlr"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.778625 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.778816 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.778942 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.779086 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.779205 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.798972 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.818621 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.844445 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.849651 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.854881 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.863903 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.864060 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.924737 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.933796 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/026253d8-eaea-4c12-91e0-455331cdaa5e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.933885 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.933940 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/026253d8-eaea-4c12-91e0-455331cdaa5e-config-data\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.933974 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.934037 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/026253d8-eaea-4c12-91e0-455331cdaa5e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.934082 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-89a365df-e5d2-47cd-ba73-ad62767e7783\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a365df-e5d2-47cd-ba73-ad62767e7783\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.934161 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/026253d8-eaea-4c12-91e0-455331cdaa5e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.934257 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/026253d8-eaea-4c12-91e0-455331cdaa5e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.937324 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.937478 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4skrz\" (UniqueName: \"kubernetes.io/projected/026253d8-eaea-4c12-91e0-455331cdaa5e-kube-api-access-4skrz\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:51 crc kubenswrapper[4794]: I0216 17:19:51.937726 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 16 17:19:52 crc kubenswrapper[4794]: W0216 17:19:52.005987 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab201dca_05ea_4f61_a71f_be55c9587777.slice/crio-78562777748db48d9f8f620c868e837a4e63365c1512c6fd7f5d87a7c8e15e7e WatchSource:0}: Error finding container 78562777748db48d9f8f620c868e837a4e63365c1512c6fd7f5d87a7c8e15e7e: Status 404 returned error can't find the container with id 78562777748db48d9f8f620c868e837a4e63365c1512c6fd7f5d87a7c8e15e7e
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.006896 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tjvdt"]
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.039846 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/026253d8-eaea-4c12-91e0-455331cdaa5e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.039941 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs5wx\" (UniqueName: \"kubernetes.io/projected/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-kube-api-access-hs5wx\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.040154 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-pod-info\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.040261 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.040339 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.040379 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/026253d8-eaea-4c12-91e0-455331cdaa5e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.040443 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-50f0f550-0bce-496f-9120-455efff95d36\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-50f0f550-0bce-496f-9120-455efff95d36\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.040475 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8fb6be66-7fef-4554-897b-30d9f4637138-pod-info\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.040496 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.040523 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.040585 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4skrz\" (UniqueName: \"kubernetes.io/projected/026253d8-eaea-4c12-91e0-455331cdaa5e-kube-api-access-4skrz\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.040616 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-server-conf\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.040647 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.040689 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.040727 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.040741 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8fb6be66-7fef-4554-897b-30d9f4637138-server-conf\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.040781 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fb6be66-7fef-4554-897b-30d9f4637138-config-data\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.040795 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-config-data\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.040842 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.040873 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.040915 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8fb6be66-7fef-4554-897b-30d9f4637138-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.040955 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.040986 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8fb6be66-7fef-4554-897b-30d9f4637138-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.041007 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.041047 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.041072 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrs65\" (UniqueName: \"kubernetes.io/projected/8fb6be66-7fef-4554-897b-30d9f4637138-kube-api-access-rrs65\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.041131 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/026253d8-eaea-4c12-91e0-455331cdaa5e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.041200 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.041241 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/026253d8-eaea-4c12-91e0-455331cdaa5e-config-data\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.041271 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.041312 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.041374 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/026253d8-eaea-4c12-91e0-455331cdaa5e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.041457 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-89a365df-e5d2-47cd-ba73-ad62767e7783\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a365df-e5d2-47cd-ba73-ad62767e7783\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.041338 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/026253d8-eaea-4c12-91e0-455331cdaa5e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.041513 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.044028 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/026253d8-eaea-4c12-91e0-455331cdaa5e-config-data\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.044247 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.045475 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/026253d8-eaea-4c12-91e0-455331cdaa5e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.057208 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.057270 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-89a365df-e5d2-47cd-ba73-ad62767e7783\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a365df-e5d2-47cd-ba73-ad62767e7783\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c0b4d361c22c333b13f1a0671c782685ad05346f3c98eaa4d7999cbaa1be313f/globalmount\"" pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.057704 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/026253d8-eaea-4c12-91e0-455331cdaa5e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.062825 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/026253d8-eaea-4c12-91e0-455331cdaa5e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.074143 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.077982 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4skrz\" (UniqueName: \"kubernetes.io/projected/026253d8-eaea-4c12-91e0-455331cdaa5e-kube-api-access-4skrz\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.085883 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.145498 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.145586 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.145605 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8fb6be66-7fef-4554-897b-30d9f4637138-server-conf\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.145645 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fb6be66-7fef-4554-897b-30d9f4637138-config-data\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.145666 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-config-data\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.145700 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.145726 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.145754 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8fb6be66-7fef-4554-897b-30d9f4637138-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.145793 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8fb6be66-7fef-4554-897b-30d9f4637138-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.145808 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.145843 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.145864 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrs65\" (UniqueName: \"kubernetes.io/projected/8fb6be66-7fef-4554-897b-30d9f4637138-kube-api-access-rrs65\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.145928 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.146615 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.147719 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.150275 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-config-data\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.151105 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fb6be66-7fef-4554-897b-30d9f4637138-config-data\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.151291 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8fb6be66-7fef-4554-897b-30d9f4637138-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.151585 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.152139 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.152788 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.153043 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs5wx\" (UniqueName: \"kubernetes.io/projected/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-kube-api-access-hs5wx\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.154210 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.154776 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/242f881427f1e742f812f05a8fc0a139e128bcd26a1d8cef4f20918c4b6df8a4/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.154798 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-pod-info\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.155573 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.155641 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.155731 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-50f0f550-0bce-496f-9120-455efff95d36\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-50f0f550-0bce-496f-9120-455efff95d36\") pod 
\"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.155778 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8fb6be66-7fef-4554-897b-30d9f4637138-pod-info\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.155799 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.155952 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-server-conf\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.156213 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.157226 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.158948 4794 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.159027 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8fb6be66-7fef-4554-897b-30d9f4637138-server-conf\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.163431 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-server-conf\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.163665 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.164747 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-pod-info\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.164877 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8fb6be66-7fef-4554-897b-30d9f4637138-pod-info\") pod \"rabbitmq-server-1\" (UID: 
\"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.164987 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.166081 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.166448 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-50f0f550-0bce-496f-9120-455efff95d36\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-50f0f550-0bce-496f-9120-455efff95d36\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4c15002b76397959393cbf983d3dd1ee42d1ae06ec66f0df68175f8304780e0f/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.166884 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.168529 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.171204 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8fb6be66-7fef-4554-897b-30d9f4637138-erlang-cookie-secret\") pod 
\"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.172935 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.192245 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.192409 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.192534 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.192650 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.192772 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-8m5dd" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.192883 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.192990 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.193230 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrs65\" (UniqueName: \"kubernetes.io/projected/8fb6be66-7fef-4554-897b-30d9f4637138-kube-api-access-rrs65\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.195808 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.205113 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs5wx\" (UniqueName: \"kubernetes.io/projected/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-kube-api-access-hs5wx\") pod \"rabbitmq-server-2\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.221962 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-89a365df-e5d2-47cd-ba73-ad62767e7783\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a365df-e5d2-47cd-ba73-ad62767e7783\") pod \"rabbitmq-server-0\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") " pod="openstack/rabbitmq-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.261801 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/47572286-fbbf-4189-9c6f-feb54624ee2a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.261863 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.261882 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.261921 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nddf6\" (UniqueName: \"kubernetes.io/projected/47572286-fbbf-4189-9c6f-feb54624ee2a-kube-api-access-nddf6\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.262007 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/47572286-fbbf-4189-9c6f-feb54624ee2a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.262180 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/47572286-fbbf-4189-9c6f-feb54624ee2a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.262259 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.262341 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.262358 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.262508 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/47572286-fbbf-4189-9c6f-feb54624ee2a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.262531 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/47572286-fbbf-4189-9c6f-feb54624ee2a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.280673 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\") pod \"rabbitmq-server-1\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " pod="openstack/rabbitmq-server-1" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.309500 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-50f0f550-0bce-496f-9120-455efff95d36\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-50f0f550-0bce-496f-9120-455efff95d36\") pod \"rabbitmq-server-2\" 
(UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " pod="openstack/rabbitmq-server-2" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.333908 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-sz7q7" event={"ID":"d554e5d8-e390-484d-b414-ace409b51d91","Type":"ContainerStarted","Data":"9071a90f7fa541d5a4bd07f7d4d727fba8e6c8f5ddf8370c1774cc7490a72fa6"} Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.335832 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-tjvdt" event={"ID":"ab201dca-05ea-4f61-a71f-be55c9587777","Type":"ContainerStarted","Data":"78562777748db48d9f8f620c868e837a4e63365c1512c6fd7f5d87a7c8e15e7e"} Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.364841 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/47572286-fbbf-4189-9c6f-feb54624ee2a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.364886 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/47572286-fbbf-4189-9c6f-feb54624ee2a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.364955 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/47572286-fbbf-4189-9c6f-feb54624ee2a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.364983 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.364999 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.365019 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddf6\" (UniqueName: \"kubernetes.io/projected/47572286-fbbf-4189-9c6f-feb54624ee2a-kube-api-access-nddf6\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.365062 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/47572286-fbbf-4189-9c6f-feb54624ee2a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.365109 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/47572286-fbbf-4189-9c6f-feb54624ee2a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.365126 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.365163 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.365181 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.365664 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.366633 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/47572286-fbbf-4189-9c6f-feb54624ee2a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.367703 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/47572286-fbbf-4189-9c6f-feb54624ee2a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.369129 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/47572286-fbbf-4189-9c6f-feb54624ee2a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.369436 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/47572286-fbbf-4189-9c6f-feb54624ee2a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.369497 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/47572286-fbbf-4189-9c6f-feb54624ee2a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.371602 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.371773 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.372366 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.372873 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.373014 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fd6fe655d1de5a63c78809a5a13c105c52992f0077ee7c00afae181712258956/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.389458 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nddf6\" (UniqueName: \"kubernetes.io/projected/47572286-fbbf-4189-9c6f-feb54624ee2a-kube-api-access-nddf6\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.433004 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb\") pod \"rabbitmq-cell1-server-0\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " 
pod="openstack/rabbitmq-cell1-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.462314 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.512235 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.518202 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Feb 16 17:19:52 crc kubenswrapper[4794]: I0216 17:19:52.526966 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.052992 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.259920 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.263136 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.269029 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.269883 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.271219 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-nxcc6"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.271507 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.277828 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.280011 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.298486 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 16 17:19:53 crc kubenswrapper[4794]: W0216 17:19:53.323914 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fb6be66_7fef_4554_897b_30d9f4637138.slice/crio-ee57265729545919a3dfbdf0d3a200acd3d16f60923386d102805b7aabe256da WatchSource:0}: Error finding container ee57265729545919a3dfbdf0d3a200acd3d16f60923386d102805b7aabe256da: Status 404 returned error can't find the container with id ee57265729545919a3dfbdf0d3a200acd3d16f60923386d102805b7aabe256da
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.373077 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"8fb6be66-7fef-4554-897b-30d9f4637138","Type":"ContainerStarted","Data":"ee57265729545919a3dfbdf0d3a200acd3d16f60923386d102805b7aabe256da"}
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.393628 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"026253d8-eaea-4c12-91e0-455331cdaa5e","Type":"ContainerStarted","Data":"25237b1f3d7add63fa3f53454163ee819bb25eb18acd6ba04da6b9f4b494bb8a"}
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.402720 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkwm2\" (UniqueName: \"kubernetes.io/projected/c07f58cd-ea21-4cb3-a3db-0d184c3628bd-kube-api-access-qkwm2\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.413668 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c07f58cd-ea21-4cb3-a3db-0d184c3628bd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.413819 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c07f58cd-ea21-4cb3-a3db-0d184c3628bd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.413867 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c07f58cd-ea21-4cb3-a3db-0d184c3628bd-kolla-config\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.414116 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c07f58cd-ea21-4cb3-a3db-0d184c3628bd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.414171 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c07f58cd-ea21-4cb3-a3db-0d184c3628bd-config-data-default\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.414247 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-30210d77-a3e0-4e4f-9c11-e1e900b1be2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-30210d77-a3e0-4e4f-9c11-e1e900b1be2e\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.414288 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c07f58cd-ea21-4cb3-a3db-0d184c3628bd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.492858 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.522631 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c07f58cd-ea21-4cb3-a3db-0d184c3628bd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.522684 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c07f58cd-ea21-4cb3-a3db-0d184c3628bd-config-data-default\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.522722 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-30210d77-a3e0-4e4f-9c11-e1e900b1be2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-30210d77-a3e0-4e4f-9c11-e1e900b1be2e\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.522750 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c07f58cd-ea21-4cb3-a3db-0d184c3628bd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.522943 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkwm2\" (UniqueName: \"kubernetes.io/projected/c07f58cd-ea21-4cb3-a3db-0d184c3628bd-kube-api-access-qkwm2\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.523029 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c07f58cd-ea21-4cb3-a3db-0d184c3628bd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.523053 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c07f58cd-ea21-4cb3-a3db-0d184c3628bd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.523080 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c07f58cd-ea21-4cb3-a3db-0d184c3628bd-kolla-config\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.523871 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c07f58cd-ea21-4cb3-a3db-0d184c3628bd-kolla-config\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: W0216 17:19:53.528453 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14a6d353_2dbd_49f5_b69f_1fdcd5c13db8.slice/crio-8dccc3ba620f2918eb1300115aaccebdba42f6717766d76b9570f415509e95bd WatchSource:0}: Error finding container 8dccc3ba620f2918eb1300115aaccebdba42f6717766d76b9570f415509e95bd: Status 404 returned error can't find the container with id 8dccc3ba620f2918eb1300115aaccebdba42f6717766d76b9570f415509e95bd
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.529421 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c07f58cd-ea21-4cb3-a3db-0d184c3628bd-config-data-default\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.530514 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c07f58cd-ea21-4cb3-a3db-0d184c3628bd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.530872 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c07f58cd-ea21-4cb3-a3db-0d184c3628bd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.544288 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.544374 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-30210d77-a3e0-4e4f-9c11-e1e900b1be2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-30210d77-a3e0-4e4f-9c11-e1e900b1be2e\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d3f54bda3a2f2ecb5aa1669974aa596c1ac0a69e90afa150f71320e52f1e5df9/globalmount\"" pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.551181 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c07f58cd-ea21-4cb3-a3db-0d184c3628bd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.554852 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c07f58cd-ea21-4cb3-a3db-0d184c3628bd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.560396 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkwm2\" (UniqueName: \"kubernetes.io/projected/c07f58cd-ea21-4cb3-a3db-0d184c3628bd-kube-api-access-qkwm2\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.601750 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-30210d77-a3e0-4e4f-9c11-e1e900b1be2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-30210d77-a3e0-4e4f-9c11-e1e900b1be2e\") pod \"openstack-galera-0\" (UID: \"c07f58cd-ea21-4cb3-a3db-0d184c3628bd\") " pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.685445 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 16 17:19:53 crc kubenswrapper[4794]: I0216 17:19:53.739503 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.408122 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8","Type":"ContainerStarted","Data":"8dccc3ba620f2918eb1300115aaccebdba42f6717766d76b9570f415509e95bd"}
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.678715 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.682462 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.689683 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-vhk4x"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.689938 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.690113 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.690256 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.696969 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.762921 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/927505a3-c47f-4b5a-ac60-d35b0140edfe-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.763238 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/927505a3-c47f-4b5a-ac60-d35b0140edfe-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.763333 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-df238faa-71f2-4638-85c7-afa9eacd18e4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df238faa-71f2-4638-85c7-afa9eacd18e4\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.763353 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/927505a3-c47f-4b5a-ac60-d35b0140edfe-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.763379 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927505a3-c47f-4b5a-ac60-d35b0140edfe-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.763398 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/927505a3-c47f-4b5a-ac60-d35b0140edfe-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.763422 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncdnt\" (UniqueName: \"kubernetes.io/projected/927505a3-c47f-4b5a-ac60-d35b0140edfe-kube-api-access-ncdnt\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.763491 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/927505a3-c47f-4b5a-ac60-d35b0140edfe-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.782404 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.784463 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.787900 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.788538 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-zl47t"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.788634 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.835670 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.865523 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/927505a3-c47f-4b5a-ac60-d35b0140edfe-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.865577 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-df238faa-71f2-4638-85c7-afa9eacd18e4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df238faa-71f2-4638-85c7-afa9eacd18e4\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.865598 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/927505a3-c47f-4b5a-ac60-d35b0140edfe-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.865618 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927505a3-c47f-4b5a-ac60-d35b0140edfe-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.865639 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/927505a3-c47f-4b5a-ac60-d35b0140edfe-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.865660 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncdnt\" (UniqueName: \"kubernetes.io/projected/927505a3-c47f-4b5a-ac60-d35b0140edfe-kube-api-access-ncdnt\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.865688 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8695c855-a285-408e-a018-ee0060a832e1-config-data\") pod \"memcached-0\" (UID: \"8695c855-a285-408e-a018-ee0060a832e1\") " pod="openstack/memcached-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.865718 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/927505a3-c47f-4b5a-ac60-d35b0140edfe-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.865739 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/927505a3-c47f-4b5a-ac60-d35b0140edfe-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.865784 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8695c855-a285-408e-a018-ee0060a832e1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8695c855-a285-408e-a018-ee0060a832e1\") " pod="openstack/memcached-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.865807 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8695c855-a285-408e-a018-ee0060a832e1-kolla-config\") pod \"memcached-0\" (UID: \"8695c855-a285-408e-a018-ee0060a832e1\") " pod="openstack/memcached-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.865850 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8695c855-a285-408e-a018-ee0060a832e1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8695c855-a285-408e-a018-ee0060a832e1\") " pod="openstack/memcached-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.865866 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w6fm\" (UniqueName: \"kubernetes.io/projected/8695c855-a285-408e-a018-ee0060a832e1-kube-api-access-4w6fm\") pod \"memcached-0\" (UID: \"8695c855-a285-408e-a018-ee0060a832e1\") " pod="openstack/memcached-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.867191 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/927505a3-c47f-4b5a-ac60-d35b0140edfe-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.868039 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/927505a3-c47f-4b5a-ac60-d35b0140edfe-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.870378 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/927505a3-c47f-4b5a-ac60-d35b0140edfe-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.875370 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.875404 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-df238faa-71f2-4638-85c7-afa9eacd18e4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df238faa-71f2-4638-85c7-afa9eacd18e4\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/157a173ca962198882e4a5e16e30aaa5bf42c48747e4118be40886d807b1f22f/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.875924 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/927505a3-c47f-4b5a-ac60-d35b0140edfe-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.879533 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/927505a3-c47f-4b5a-ac60-d35b0140edfe-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.882014 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927505a3-c47f-4b5a-ac60-d35b0140edfe-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.903719 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncdnt\" (UniqueName: \"kubernetes.io/projected/927505a3-c47f-4b5a-ac60-d35b0140edfe-kube-api-access-ncdnt\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.940837 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-df238faa-71f2-4638-85c7-afa9eacd18e4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df238faa-71f2-4638-85c7-afa9eacd18e4\") pod \"openstack-cell1-galera-0\" (UID: \"927505a3-c47f-4b5a-ac60-d35b0140edfe\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.967726 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8695c855-a285-408e-a018-ee0060a832e1-config-data\") pod \"memcached-0\" (UID: \"8695c855-a285-408e-a018-ee0060a832e1\") " pod="openstack/memcached-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.967819 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8695c855-a285-408e-a018-ee0060a832e1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8695c855-a285-408e-a018-ee0060a832e1\") " pod="openstack/memcached-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.967846 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8695c855-a285-408e-a018-ee0060a832e1-kolla-config\") pod \"memcached-0\" (UID: \"8695c855-a285-408e-a018-ee0060a832e1\") " pod="openstack/memcached-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.967887 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8695c855-a285-408e-a018-ee0060a832e1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8695c855-a285-408e-a018-ee0060a832e1\") " pod="openstack/memcached-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.967907 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w6fm\" (UniqueName: \"kubernetes.io/projected/8695c855-a285-408e-a018-ee0060a832e1-kube-api-access-4w6fm\") pod \"memcached-0\" (UID: \"8695c855-a285-408e-a018-ee0060a832e1\") " pod="openstack/memcached-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.969090 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8695c855-a285-408e-a018-ee0060a832e1-config-data\") pod \"memcached-0\" (UID: \"8695c855-a285-408e-a018-ee0060a832e1\") " pod="openstack/memcached-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.969942 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8695c855-a285-408e-a018-ee0060a832e1-kolla-config\") pod \"memcached-0\" (UID: \"8695c855-a285-408e-a018-ee0060a832e1\") " pod="openstack/memcached-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.980512 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8695c855-a285-408e-a018-ee0060a832e1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8695c855-a285-408e-a018-ee0060a832e1\") " pod="openstack/memcached-0"
Feb 16 17:19:54 crc kubenswrapper[4794]: I0216 17:19:54.995205 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8695c855-a285-408e-a018-ee0060a832e1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8695c855-a285-408e-a018-ee0060a832e1\") " pod="openstack/memcached-0"
Feb 16 17:19:55 crc kubenswrapper[4794]: I0216 17:19:55.000559 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w6fm\" (UniqueName: \"kubernetes.io/projected/8695c855-a285-408e-a018-ee0060a832e1-kube-api-access-4w6fm\") pod \"memcached-0\" (UID: \"8695c855-a285-408e-a018-ee0060a832e1\") " pod="openstack/memcached-0"
Feb 16 17:19:55 crc kubenswrapper[4794]: I0216 17:19:55.027477 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 16 17:19:55 crc kubenswrapper[4794]: I0216 17:19:55.127126 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 16 17:19:57 crc kubenswrapper[4794]: I0216 17:19:57.201502 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 17:19:57 crc kubenswrapper[4794]: I0216 17:19:57.203248 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 16 17:19:57 crc kubenswrapper[4794]: I0216 17:19:57.208029 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-x298d"
Feb 16 17:19:57 crc kubenswrapper[4794]: I0216 17:19:57.229092 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 17:19:57 crc kubenswrapper[4794]: I0216 17:19:57.327917 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhff4\" (UniqueName: \"kubernetes.io/projected/bad8e694-f919-4a68-b0ce-95c9b55ba56a-kube-api-access-mhff4\") pod \"kube-state-metrics-0\" (UID: \"bad8e694-f919-4a68-b0ce-95c9b55ba56a\") " pod="openstack/kube-state-metrics-0"
Feb 16 17:19:57 crc kubenswrapper[4794]: I0216 17:19:57.429972 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhff4\" (UniqueName: \"kubernetes.io/projected/bad8e694-f919-4a68-b0ce-95c9b55ba56a-kube-api-access-mhff4\") pod \"kube-state-metrics-0\" (UID: \"bad8e694-f919-4a68-b0ce-95c9b55ba56a\") " pod="openstack/kube-state-metrics-0"
Feb 16 17:19:57 crc kubenswrapper[4794]: I0216 17:19:57.459439 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhff4\" (UniqueName: \"kubernetes.io/projected/bad8e694-f919-4a68-b0ce-95c9b55ba56a-kube-api-access-mhff4\") pod \"kube-state-metrics-0\" (UID: \"bad8e694-f919-4a68-b0ce-95c9b55ba56a\") " pod="openstack/kube-state-metrics-0"
Feb 16 17:19:57 crc kubenswrapper[4794]: I0216 17:19:57.528074 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.092572 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-xwlnp"]
Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.094107 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-xwlnp"
Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.098518 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards"
Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.098705 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-wqwgz"
Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.108742 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-xwlnp"]
Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.149221 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-xwlnp\" (UID: \"c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-xwlnp"
Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.149363 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv6dx\" (UniqueName: \"kubernetes.io/projected/c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53-kube-api-access-fv6dx\") pod \"observability-ui-dashboards-66cbf594b5-xwlnp\" (UID: \"c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-xwlnp"
Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.252019 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-xwlnp\" (UID: \"c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-xwlnp"
Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.252162 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv6dx\" (UniqueName: \"kubernetes.io/projected/c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53-kube-api-access-fv6dx\") pod \"observability-ui-dashboards-66cbf594b5-xwlnp\" (UID: \"c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-xwlnp"
Feb 16 17:19:58 crc kubenswrapper[4794]: E0216 17:19:58.252231 4794 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found
Feb 16 17:19:58 crc kubenswrapper[4794]: E0216 17:19:58.252325 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53-serving-cert podName:c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53 nodeName:}" failed. No retries permitted until 2026-02-16 17:19:58.752293304 +0000 UTC m=+1224.700387951 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53-serving-cert") pod "observability-ui-dashboards-66cbf594b5-xwlnp" (UID: "c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53") : secret "observability-ui-dashboards" not found
Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.282103 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv6dx\" (UniqueName: \"kubernetes.io/projected/c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53-kube-api-access-fv6dx\") pod \"observability-ui-dashboards-66cbf594b5-xwlnp\" (UID: \"c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-xwlnp"
Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.399030 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5bcc65df4f-mfqcw"]
Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.400207 4794 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.444286 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5bcc65df4f-mfqcw"] Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.457507 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/be1ac6c4-345c-41e7-992b-f53c4c4eba25-service-ca\") pod \"console-5bcc65df4f-mfqcw\" (UID: \"be1ac6c4-345c-41e7-992b-f53c4c4eba25\") " pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.457550 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/be1ac6c4-345c-41e7-992b-f53c4c4eba25-console-serving-cert\") pod \"console-5bcc65df4f-mfqcw\" (UID: \"be1ac6c4-345c-41e7-992b-f53c4c4eba25\") " pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.457573 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/be1ac6c4-345c-41e7-992b-f53c4c4eba25-oauth-serving-cert\") pod \"console-5bcc65df4f-mfqcw\" (UID: \"be1ac6c4-345c-41e7-992b-f53c4c4eba25\") " pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.457613 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/be1ac6c4-345c-41e7-992b-f53c4c4eba25-console-oauth-config\") pod \"console-5bcc65df4f-mfqcw\" (UID: \"be1ac6c4-345c-41e7-992b-f53c4c4eba25\") " pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.457636 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvbcf\" (UniqueName: \"kubernetes.io/projected/be1ac6c4-345c-41e7-992b-f53c4c4eba25-kube-api-access-lvbcf\") pod \"console-5bcc65df4f-mfqcw\" (UID: \"be1ac6c4-345c-41e7-992b-f53c4c4eba25\") " pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.457654 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be1ac6c4-345c-41e7-992b-f53c4c4eba25-trusted-ca-bundle\") pod \"console-5bcc65df4f-mfqcw\" (UID: \"be1ac6c4-345c-41e7-992b-f53c4c4eba25\") " pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.457680 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/be1ac6c4-345c-41e7-992b-f53c4c4eba25-console-config\") pod \"console-5bcc65df4f-mfqcw\" (UID: \"be1ac6c4-345c-41e7-992b-f53c4c4eba25\") " pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.559402 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/be1ac6c4-345c-41e7-992b-f53c4c4eba25-service-ca\") pod \"console-5bcc65df4f-mfqcw\" (UID: \"be1ac6c4-345c-41e7-992b-f53c4c4eba25\") " pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.559445 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/be1ac6c4-345c-41e7-992b-f53c4c4eba25-console-serving-cert\") pod \"console-5bcc65df4f-mfqcw\" (UID: \"be1ac6c4-345c-41e7-992b-f53c4c4eba25\") " pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.559463 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/be1ac6c4-345c-41e7-992b-f53c4c4eba25-oauth-serving-cert\") pod \"console-5bcc65df4f-mfqcw\" (UID: \"be1ac6c4-345c-41e7-992b-f53c4c4eba25\") " pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.559506 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/be1ac6c4-345c-41e7-992b-f53c4c4eba25-console-oauth-config\") pod \"console-5bcc65df4f-mfqcw\" (UID: \"be1ac6c4-345c-41e7-992b-f53c4c4eba25\") " pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.559530 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvbcf\" (UniqueName: \"kubernetes.io/projected/be1ac6c4-345c-41e7-992b-f53c4c4eba25-kube-api-access-lvbcf\") pod \"console-5bcc65df4f-mfqcw\" (UID: \"be1ac6c4-345c-41e7-992b-f53c4c4eba25\") " pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.559549 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be1ac6c4-345c-41e7-992b-f53c4c4eba25-trusted-ca-bundle\") pod \"console-5bcc65df4f-mfqcw\" (UID: \"be1ac6c4-345c-41e7-992b-f53c4c4eba25\") " pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.559568 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/be1ac6c4-345c-41e7-992b-f53c4c4eba25-console-config\") pod \"console-5bcc65df4f-mfqcw\" (UID: \"be1ac6c4-345c-41e7-992b-f53c4c4eba25\") " pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.560356 4794 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/be1ac6c4-345c-41e7-992b-f53c4c4eba25-console-config\") pod \"console-5bcc65df4f-mfqcw\" (UID: \"be1ac6c4-345c-41e7-992b-f53c4c4eba25\") " pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.561123 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/be1ac6c4-345c-41e7-992b-f53c4c4eba25-oauth-serving-cert\") pod \"console-5bcc65df4f-mfqcw\" (UID: \"be1ac6c4-345c-41e7-992b-f53c4c4eba25\") " pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.561586 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be1ac6c4-345c-41e7-992b-f53c4c4eba25-trusted-ca-bundle\") pod \"console-5bcc65df4f-mfqcw\" (UID: \"be1ac6c4-345c-41e7-992b-f53c4c4eba25\") " pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.564859 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/be1ac6c4-345c-41e7-992b-f53c4c4eba25-service-ca\") pod \"console-5bcc65df4f-mfqcw\" (UID: \"be1ac6c4-345c-41e7-992b-f53c4c4eba25\") " pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.567117 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/be1ac6c4-345c-41e7-992b-f53c4c4eba25-console-oauth-config\") pod \"console-5bcc65df4f-mfqcw\" (UID: \"be1ac6c4-345c-41e7-992b-f53c4c4eba25\") " pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.576776 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/be1ac6c4-345c-41e7-992b-f53c4c4eba25-console-serving-cert\") pod \"console-5bcc65df4f-mfqcw\" (UID: \"be1ac6c4-345c-41e7-992b-f53c4c4eba25\") " pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.596485 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.602554 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.602770 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvbcf\" (UniqueName: \"kubernetes.io/projected/be1ac6c4-345c-41e7-992b-f53c4c4eba25-kube-api-access-lvbcf\") pod \"console-5bcc65df4f-mfqcw\" (UID: \"be1ac6c4-345c-41e7-992b-f53c4c4eba25\") " pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.606430 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.606577 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.606732 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.606841 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.607107 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.607653 4794 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.607907 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8ndpc" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.619510 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.619892 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.665183 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d6f0b7b-1214-4425-a850-09933e0e9a6e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.665288 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d6f0b7b-1214-4425-a850-09933e0e9a6e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.665542 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9d6f0b7b-1214-4425-a850-09933e0e9a6e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.665633 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d6f0b7b-1214-4425-a850-09933e0e9a6e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.665713 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-847c3bfd-2842-4b87-9058-2b4210d0df84\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-847c3bfd-2842-4b87-9058-2b4210d0df84\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.665797 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9d6f0b7b-1214-4425-a850-09933e0e9a6e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.665840 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d6f0b7b-1214-4425-a850-09933e0e9a6e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.665869 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d6f0b7b-1214-4425-a850-09933e0e9a6e-thanos-prometheus-http-client-file\") pod 
\"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.665926 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d6f0b7b-1214-4425-a850-09933e0e9a6e-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.665964 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4lnq\" (UniqueName: \"kubernetes.io/projected/9d6f0b7b-1214-4425-a850-09933e0e9a6e-kube-api-access-x4lnq\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.749191 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5bcc65df4f-mfqcw" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.767428 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9d6f0b7b-1214-4425-a850-09933e0e9a6e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.767500 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d6f0b7b-1214-4425-a850-09933e0e9a6e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.767542 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-847c3bfd-2842-4b87-9058-2b4210d0df84\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-847c3bfd-2842-4b87-9058-2b4210d0df84\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.767593 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9d6f0b7b-1214-4425-a850-09933e0e9a6e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.767623 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/9d6f0b7b-1214-4425-a850-09933e0e9a6e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.767648 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d6f0b7b-1214-4425-a850-09933e0e9a6e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.767689 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d6f0b7b-1214-4425-a850-09933e0e9a6e-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.767718 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4lnq\" (UniqueName: \"kubernetes.io/projected/9d6f0b7b-1214-4425-a850-09933e0e9a6e-kube-api-access-x4lnq\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.767765 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d6f0b7b-1214-4425-a850-09933e0e9a6e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.767793 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-xwlnp\" (UID: \"c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-xwlnp" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.767844 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d6f0b7b-1214-4425-a850-09933e0e9a6e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.769500 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9d6f0b7b-1214-4425-a850-09933e0e9a6e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.769919 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9d6f0b7b-1214-4425-a850-09933e0e9a6e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.770450 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d6f0b7b-1214-4425-a850-09933e0e9a6e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.770463 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d6f0b7b-1214-4425-a850-09933e0e9a6e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.777650 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d6f0b7b-1214-4425-a850-09933e0e9a6e-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.778740 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-xwlnp\" (UID: \"c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-xwlnp" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.778755 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.779072 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-847c3bfd-2842-4b87-9058-2b4210d0df84\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-847c3bfd-2842-4b87-9058-2b4210d0df84\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/60fc82cc31f45bb9356123d8b01b5dfd7c96515a1e2ee078d5b084dc843df6e3/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.783071 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d6f0b7b-1214-4425-a850-09933e0e9a6e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.785706 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d6f0b7b-1214-4425-a850-09933e0e9a6e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.802141 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d6f0b7b-1214-4425-a850-09933e0e9a6e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.808176 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4lnq\" (UniqueName: 
\"kubernetes.io/projected/9d6f0b7b-1214-4425-a850-09933e0e9a6e-kube-api-access-x4lnq\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:58 crc kubenswrapper[4794]: I0216 17:19:58.824537 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-847c3bfd-2842-4b87-9058-2b4210d0df84\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-847c3bfd-2842-4b87-9058-2b4210d0df84\") pod \"prometheus-metric-storage-0\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:59 crc kubenswrapper[4794]: I0216 17:19:59.001028 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:59 crc kubenswrapper[4794]: I0216 17:19:59.032776 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-xwlnp" Feb 16 17:19:59 crc kubenswrapper[4794]: I0216 17:19:59.542508 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"47572286-fbbf-4189-9c6f-feb54624ee2a","Type":"ContainerStarted","Data":"2b0ed0ea2b15a42c330584c07dfdbcd182b1d2f69dca7f086e773464cc8fbb90"} Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.583918 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-frfcd"] Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.585453 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.588939 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.589179 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.593558 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-ssmkn" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.611211 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e6ba4ad1-ede1-49d7-a317-8f6d71134947-var-run-ovn\") pod \"ovn-controller-frfcd\" (UID: \"e6ba4ad1-ede1-49d7-a317-8f6d71134947\") " pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.611263 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6ba4ad1-ede1-49d7-a317-8f6d71134947-ovn-controller-tls-certs\") pod \"ovn-controller-frfcd\" (UID: \"e6ba4ad1-ede1-49d7-a317-8f6d71134947\") " pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.611337 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e6ba4ad1-ede1-49d7-a317-8f6d71134947-var-run\") pod \"ovn-controller-frfcd\" (UID: \"e6ba4ad1-ede1-49d7-a317-8f6d71134947\") " pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.611365 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v59tw\" (UniqueName: 
\"kubernetes.io/projected/e6ba4ad1-ede1-49d7-a317-8f6d71134947-kube-api-access-v59tw\") pod \"ovn-controller-frfcd\" (UID: \"e6ba4ad1-ede1-49d7-a317-8f6d71134947\") " pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.611431 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6ba4ad1-ede1-49d7-a317-8f6d71134947-scripts\") pod \"ovn-controller-frfcd\" (UID: \"e6ba4ad1-ede1-49d7-a317-8f6d71134947\") " pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.611462 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e6ba4ad1-ede1-49d7-a317-8f6d71134947-var-log-ovn\") pod \"ovn-controller-frfcd\" (UID: \"e6ba4ad1-ede1-49d7-a317-8f6d71134947\") " pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.611481 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ba4ad1-ede1-49d7-a317-8f6d71134947-combined-ca-bundle\") pod \"ovn-controller-frfcd\" (UID: \"e6ba4ad1-ede1-49d7-a317-8f6d71134947\") " pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.614133 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-frfcd"] Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.695359 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-jgbgf"] Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.698228 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.713176 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6ba4ad1-ede1-49d7-a317-8f6d71134947-scripts\") pod \"ovn-controller-frfcd\" (UID: \"e6ba4ad1-ede1-49d7-a317-8f6d71134947\") " pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.713223 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e6ba4ad1-ede1-49d7-a317-8f6d71134947-var-log-ovn\") pod \"ovn-controller-frfcd\" (UID: \"e6ba4ad1-ede1-49d7-a317-8f6d71134947\") " pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.713242 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ba4ad1-ede1-49d7-a317-8f6d71134947-combined-ca-bundle\") pod \"ovn-controller-frfcd\" (UID: \"e6ba4ad1-ede1-49d7-a317-8f6d71134947\") " pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.713275 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e6ba4ad1-ede1-49d7-a317-8f6d71134947-var-run-ovn\") pod \"ovn-controller-frfcd\" (UID: \"e6ba4ad1-ede1-49d7-a317-8f6d71134947\") " pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.713304 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6ba4ad1-ede1-49d7-a317-8f6d71134947-ovn-controller-tls-certs\") pod \"ovn-controller-frfcd\" (UID: \"e6ba4ad1-ede1-49d7-a317-8f6d71134947\") " pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.713372 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e6ba4ad1-ede1-49d7-a317-8f6d71134947-var-run\") pod \"ovn-controller-frfcd\" (UID: \"e6ba4ad1-ede1-49d7-a317-8f6d71134947\") " pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.713399 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v59tw\" (UniqueName: \"kubernetes.io/projected/e6ba4ad1-ede1-49d7-a317-8f6d71134947-kube-api-access-v59tw\") pod \"ovn-controller-frfcd\" (UID: \"e6ba4ad1-ede1-49d7-a317-8f6d71134947\") " pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.716464 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6ba4ad1-ede1-49d7-a317-8f6d71134947-scripts\") pod \"ovn-controller-frfcd\" (UID: \"e6ba4ad1-ede1-49d7-a317-8f6d71134947\") " pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.717397 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e6ba4ad1-ede1-49d7-a317-8f6d71134947-var-log-ovn\") pod \"ovn-controller-frfcd\" (UID: \"e6ba4ad1-ede1-49d7-a317-8f6d71134947\") " pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.717479 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e6ba4ad1-ede1-49d7-a317-8f6d71134947-var-run-ovn\") pod \"ovn-controller-frfcd\" (UID: \"e6ba4ad1-ede1-49d7-a317-8f6d71134947\") " pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.719474 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e6ba4ad1-ede1-49d7-a317-8f6d71134947-var-run\") pod 
\"ovn-controller-frfcd\" (UID: \"e6ba4ad1-ede1-49d7-a317-8f6d71134947\") " pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.728473 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jgbgf"] Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.742310 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ba4ad1-ede1-49d7-a317-8f6d71134947-combined-ca-bundle\") pod \"ovn-controller-frfcd\" (UID: \"e6ba4ad1-ede1-49d7-a317-8f6d71134947\") " pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.742403 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v59tw\" (UniqueName: \"kubernetes.io/projected/e6ba4ad1-ede1-49d7-a317-8f6d71134947-kube-api-access-v59tw\") pod \"ovn-controller-frfcd\" (UID: \"e6ba4ad1-ede1-49d7-a317-8f6d71134947\") " pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.776502 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6ba4ad1-ede1-49d7-a317-8f6d71134947-ovn-controller-tls-certs\") pod \"ovn-controller-frfcd\" (UID: \"e6ba4ad1-ede1-49d7-a317-8f6d71134947\") " pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.806185 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.808292 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.812136 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.812392 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.812682 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-fjz9f" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.812846 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.814891 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/004145a2-867a-43fd-be9d-ad53806a1c19-scripts\") pod \"ovn-controller-ovs-jgbgf\" (UID: \"004145a2-867a-43fd-be9d-ad53806a1c19\") " pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.814995 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vfqr\" (UniqueName: \"kubernetes.io/projected/004145a2-867a-43fd-be9d-ad53806a1c19-kube-api-access-6vfqr\") pod \"ovn-controller-ovs-jgbgf\" (UID: \"004145a2-867a-43fd-be9d-ad53806a1c19\") " pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.815063 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/004145a2-867a-43fd-be9d-ad53806a1c19-var-log\") pod \"ovn-controller-ovs-jgbgf\" (UID: \"004145a2-867a-43fd-be9d-ad53806a1c19\") " pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.815095 
4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/004145a2-867a-43fd-be9d-ad53806a1c19-var-lib\") pod \"ovn-controller-ovs-jgbgf\" (UID: \"004145a2-867a-43fd-be9d-ad53806a1c19\") " pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.815134 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/004145a2-867a-43fd-be9d-ad53806a1c19-var-run\") pod \"ovn-controller-ovs-jgbgf\" (UID: \"004145a2-867a-43fd-be9d-ad53806a1c19\") " pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.815217 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/004145a2-867a-43fd-be9d-ad53806a1c19-etc-ovs\") pod \"ovn-controller-ovs-jgbgf\" (UID: \"004145a2-867a-43fd-be9d-ad53806a1c19\") " pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.818172 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.820214 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.907695 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-frfcd" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.934335 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/004145a2-867a-43fd-be9d-ad53806a1c19-var-run\") pod \"ovn-controller-ovs-jgbgf\" (UID: \"004145a2-867a-43fd-be9d-ad53806a1c19\") " pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.934518 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfbst\" (UniqueName: \"kubernetes.io/projected/8528fad2-4c8a-4171-92a6-eb31e80d0f2e-kube-api-access-sfbst\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.934591 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/004145a2-867a-43fd-be9d-ad53806a1c19-etc-ovs\") pod \"ovn-controller-ovs-jgbgf\" (UID: \"004145a2-867a-43fd-be9d-ad53806a1c19\") " pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.934630 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8528fad2-4c8a-4171-92a6-eb31e80d0f2e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.934696 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-163f004d-9005-41c4-a74e-3122f2a7fe7f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-163f004d-9005-41c4-a74e-3122f2a7fe7f\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" 
Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.934730 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/004145a2-867a-43fd-be9d-ad53806a1c19-scripts\") pod \"ovn-controller-ovs-jgbgf\" (UID: \"004145a2-867a-43fd-be9d-ad53806a1c19\") " pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.934762 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8528fad2-4c8a-4171-92a6-eb31e80d0f2e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.934809 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8528fad2-4c8a-4171-92a6-eb31e80d0f2e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.934840 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8528fad2-4c8a-4171-92a6-eb31e80d0f2e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.934885 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8528fad2-4c8a-4171-92a6-eb31e80d0f2e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.934914 4794 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8528fad2-4c8a-4171-92a6-eb31e80d0f2e-config\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.934949 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vfqr\" (UniqueName: \"kubernetes.io/projected/004145a2-867a-43fd-be9d-ad53806a1c19-kube-api-access-6vfqr\") pod \"ovn-controller-ovs-jgbgf\" (UID: \"004145a2-867a-43fd-be9d-ad53806a1c19\") " pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.935013 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/004145a2-867a-43fd-be9d-ad53806a1c19-var-log\") pod \"ovn-controller-ovs-jgbgf\" (UID: \"004145a2-867a-43fd-be9d-ad53806a1c19\") " pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.935124 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/004145a2-867a-43fd-be9d-ad53806a1c19-var-lib\") pod \"ovn-controller-ovs-jgbgf\" (UID: \"004145a2-867a-43fd-be9d-ad53806a1c19\") " pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.935775 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/004145a2-867a-43fd-be9d-ad53806a1c19-var-lib\") pod \"ovn-controller-ovs-jgbgf\" (UID: \"004145a2-867a-43fd-be9d-ad53806a1c19\") " pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.935870 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/004145a2-867a-43fd-be9d-ad53806a1c19-var-run\") pod \"ovn-controller-ovs-jgbgf\" (UID: \"004145a2-867a-43fd-be9d-ad53806a1c19\") " pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.936021 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/004145a2-867a-43fd-be9d-ad53806a1c19-etc-ovs\") pod \"ovn-controller-ovs-jgbgf\" (UID: \"004145a2-867a-43fd-be9d-ad53806a1c19\") " pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.936984 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/004145a2-867a-43fd-be9d-ad53806a1c19-var-log\") pod \"ovn-controller-ovs-jgbgf\" (UID: \"004145a2-867a-43fd-be9d-ad53806a1c19\") " pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.938356 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/004145a2-867a-43fd-be9d-ad53806a1c19-scripts\") pod \"ovn-controller-ovs-jgbgf\" (UID: \"004145a2-867a-43fd-be9d-ad53806a1c19\") " pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:00 crc kubenswrapper[4794]: I0216 17:20:00.964865 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vfqr\" (UniqueName: \"kubernetes.io/projected/004145a2-867a-43fd-be9d-ad53806a1c19-kube-api-access-6vfqr\") pod \"ovn-controller-ovs-jgbgf\" (UID: \"004145a2-867a-43fd-be9d-ad53806a1c19\") " pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:01 crc kubenswrapper[4794]: I0216 17:20:01.022974 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:01 crc kubenswrapper[4794]: I0216 17:20:01.037053 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8528fad2-4c8a-4171-92a6-eb31e80d0f2e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:01 crc kubenswrapper[4794]: I0216 17:20:01.037111 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8528fad2-4c8a-4171-92a6-eb31e80d0f2e-config\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:01 crc kubenswrapper[4794]: I0216 17:20:01.037232 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfbst\" (UniqueName: \"kubernetes.io/projected/8528fad2-4c8a-4171-92a6-eb31e80d0f2e-kube-api-access-sfbst\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:01 crc kubenswrapper[4794]: I0216 17:20:01.037318 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8528fad2-4c8a-4171-92a6-eb31e80d0f2e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:01 crc kubenswrapper[4794]: I0216 17:20:01.037376 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-163f004d-9005-41c4-a74e-3122f2a7fe7f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-163f004d-9005-41c4-a74e-3122f2a7fe7f\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:01 crc kubenswrapper[4794]: I0216 17:20:01.037413 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8528fad2-4c8a-4171-92a6-eb31e80d0f2e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:01 crc kubenswrapper[4794]: I0216 17:20:01.037446 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8528fad2-4c8a-4171-92a6-eb31e80d0f2e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:01 crc kubenswrapper[4794]: I0216 17:20:01.037466 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8528fad2-4c8a-4171-92a6-eb31e80d0f2e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:01 crc kubenswrapper[4794]: I0216 17:20:01.038273 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8528fad2-4c8a-4171-92a6-eb31e80d0f2e-config\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:01 crc kubenswrapper[4794]: I0216 17:20:01.038647 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8528fad2-4c8a-4171-92a6-eb31e80d0f2e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:01 crc kubenswrapper[4794]: I0216 17:20:01.039733 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8528fad2-4c8a-4171-92a6-eb31e80d0f2e-scripts\") pod \"ovsdbserver-nb-0\" 
(UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:01 crc kubenswrapper[4794]: I0216 17:20:01.040844 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8528fad2-4c8a-4171-92a6-eb31e80d0f2e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:01 crc kubenswrapper[4794]: I0216 17:20:01.042478 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8528fad2-4c8a-4171-92a6-eb31e80d0f2e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:01 crc kubenswrapper[4794]: I0216 17:20:01.043111 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8528fad2-4c8a-4171-92a6-eb31e80d0f2e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:01 crc kubenswrapper[4794]: I0216 17:20:01.047204 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:20:01 crc kubenswrapper[4794]: I0216 17:20:01.047243 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-163f004d-9005-41c4-a74e-3122f2a7fe7f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-163f004d-9005-41c4-a74e-3122f2a7fe7f\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3a94d989aefe403afbf1ffe7bc4238a8dd1578b6e94eef47266449e6fb1816a0/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:01 crc kubenswrapper[4794]: I0216 17:20:01.056131 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfbst\" (UniqueName: \"kubernetes.io/projected/8528fad2-4c8a-4171-92a6-eb31e80d0f2e-kube-api-access-sfbst\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:01 crc kubenswrapper[4794]: I0216 17:20:01.082621 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-163f004d-9005-41c4-a74e-3122f2a7fe7f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-163f004d-9005-41c4-a74e-3122f2a7fe7f\") pod \"ovsdbserver-nb-0\" (UID: \"8528fad2-4c8a-4171-92a6-eb31e80d0f2e\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:01 crc kubenswrapper[4794]: I0216 17:20:01.146563 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.194515 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.197656 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.199586 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-mhwlw" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.199840 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.200003 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.209296 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.210184 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.308594 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f1431cf2-f935-492b-8f07-1f4cb880f4c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1431cf2-f935-492b-8f07-1f4cb880f4c9\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.308664 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpn8d\" (UniqueName: \"kubernetes.io/projected/eab559f8-3130-43e5-bbf7-cf980cb15a56-kube-api-access-mpn8d\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.308708 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eab559f8-3130-43e5-bbf7-cf980cb15a56-scripts\") pod 
\"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.308748 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab559f8-3130-43e5-bbf7-cf980cb15a56-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.308816 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eab559f8-3130-43e5-bbf7-cf980cb15a56-config\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.308848 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eab559f8-3130-43e5-bbf7-cf980cb15a56-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.308896 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eab559f8-3130-43e5-bbf7-cf980cb15a56-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.308932 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eab559f8-3130-43e5-bbf7-cf980cb15a56-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.410960 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f1431cf2-f935-492b-8f07-1f4cb880f4c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1431cf2-f935-492b-8f07-1f4cb880f4c9\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.411033 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpn8d\" (UniqueName: \"kubernetes.io/projected/eab559f8-3130-43e5-bbf7-cf980cb15a56-kube-api-access-mpn8d\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.411090 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eab559f8-3130-43e5-bbf7-cf980cb15a56-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.411130 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab559f8-3130-43e5-bbf7-cf980cb15a56-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.411185 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eab559f8-3130-43e5-bbf7-cf980cb15a56-config\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.411254 
4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eab559f8-3130-43e5-bbf7-cf980cb15a56-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.411338 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eab559f8-3130-43e5-bbf7-cf980cb15a56-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.411384 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eab559f8-3130-43e5-bbf7-cf980cb15a56-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.413849 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eab559f8-3130-43e5-bbf7-cf980cb15a56-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.414516 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eab559f8-3130-43e5-bbf7-cf980cb15a56-config\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.415461 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eab559f8-3130-43e5-bbf7-cf980cb15a56-scripts\") pod 
\"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.419389 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eab559f8-3130-43e5-bbf7-cf980cb15a56-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.419485 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eab559f8-3130-43e5-bbf7-cf980cb15a56-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.420831 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab559f8-3130-43e5-bbf7-cf980cb15a56-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.421598 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.421628 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f1431cf2-f935-492b-8f07-1f4cb880f4c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1431cf2-f935-492b-8f07-1f4cb880f4c9\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/89649d7759985a4592b675743cae650ceeff0bda20ebfb3ccf9eaaac990f0e49/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.434959 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpn8d\" (UniqueName: \"kubernetes.io/projected/eab559f8-3130-43e5-bbf7-cf980cb15a56-kube-api-access-mpn8d\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.476515 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f1431cf2-f935-492b-8f07-1f4cb880f4c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1431cf2-f935-492b-8f07-1f4cb880f4c9\") pod \"ovsdbserver-sb-0\" (UID: \"eab559f8-3130-43e5-bbf7-cf980cb15a56\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:04 crc kubenswrapper[4794]: I0216 17:20:04.554481 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:12 crc kubenswrapper[4794]: E0216 17:20:12.046741 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 17:20:12 crc kubenswrapper[4794]: E0216 17:20:12.047577 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdfjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-pn2qh_openstack(771cb777-b410-44fc-bd7a-d5057227dad8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:20:12 crc kubenswrapper[4794]: E0216 17:20:12.048831 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-pn2qh" podUID="771cb777-b410-44fc-bd7a-d5057227dad8" Feb 16 17:20:12 crc kubenswrapper[4794]: E0216 17:20:12.154362 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 17:20:12 crc kubenswrapper[4794]: E0216 17:20:12.154881 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gh62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-sz7q7_openstack(d554e5d8-e390-484d-b414-ace409b51d91): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:20:12 crc kubenswrapper[4794]: E0216 17:20:12.158277 4794 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-sz7q7" podUID="d554e5d8-e390-484d-b414-ace409b51d91" Feb 16 17:20:12 crc kubenswrapper[4794]: E0216 17:20:12.202452 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 17:20:12 crc kubenswrapper[4794]: E0216 17:20:12.202782 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jngxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-gfk7g_openstack(f3a39e2d-9839-4c07-9cec-466372443514): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:20:12 crc kubenswrapper[4794]: E0216 17:20:12.204391 4794 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-gfk7g" podUID="f3a39e2d-9839-4c07-9cec-466372443514" Feb 16 17:20:12 crc kubenswrapper[4794]: E0216 17:20:12.211448 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 17:20:12 crc kubenswrapper[4794]: E0216 17:20:12.211631 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw9km,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-tjvdt_openstack(ab201dca-05ea-4f61-a71f-be55c9587777): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:20:12 crc kubenswrapper[4794]: E0216 17:20:12.215559 4794 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-tjvdt" podUID="ab201dca-05ea-4f61-a71f-be55c9587777" Feb 16 17:20:12 crc kubenswrapper[4794]: E0216 17:20:12.687884 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-sz7q7" podUID="d554e5d8-e390-484d-b414-ace409b51d91" Feb 16 17:20:12 crc kubenswrapper[4794]: E0216 17:20:12.688090 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-tjvdt" podUID="ab201dca-05ea-4f61-a71f-be55c9587777" Feb 16 17:20:13 crc kubenswrapper[4794]: I0216 17:20:13.159001 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 17:20:13 crc kubenswrapper[4794]: W0216 17:20:13.160685 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8695c855_a285_408e_a018_ee0060a832e1.slice/crio-a3b6dbfa769372bf5360f083afb8310a571788541684e719f1f9da452f5b82d2 WatchSource:0}: Error finding container a3b6dbfa769372bf5360f083afb8310a571788541684e719f1f9da452f5b82d2: Status 404 returned error can't find the container with id a3b6dbfa769372bf5360f083afb8310a571788541684e719f1f9da452f5b82d2 Feb 16 17:20:13 crc kubenswrapper[4794]: W0216 17:20:13.185267 4794 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc07f58cd_ea21_4cb3_a3db_0d184c3628bd.slice/crio-912da8a987baeb43734bc262c3ed685b95b08a40c6e4bf2e8156b91504441ab2 WatchSource:0}: Error finding container 912da8a987baeb43734bc262c3ed685b95b08a40c6e4bf2e8156b91504441ab2: Status 404 returned error can't find the container with id 912da8a987baeb43734bc262c3ed685b95b08a40c6e4bf2e8156b91504441ab2 Feb 16 17:20:13 crc kubenswrapper[4794]: I0216 17:20:13.187659 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 17:20:13 crc kubenswrapper[4794]: I0216 17:20:13.199782 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-xwlnp"] Feb 16 17:20:13 crc kubenswrapper[4794]: W0216 17:20:13.200476 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc51ad8ee_4b16_4ddc_89a6_d63e4e5abf53.slice/crio-2441b55749cadba9f0f5e67505d642a70e6407ab5c026cfa0ddb5574c789f65b WatchSource:0}: Error finding container 2441b55749cadba9f0f5e67505d642a70e6407ab5c026cfa0ddb5574c789f65b: Status 404 returned error can't find the container with id 2441b55749cadba9f0f5e67505d642a70e6407ab5c026cfa0ddb5574c789f65b Feb 16 17:20:13 crc kubenswrapper[4794]: I0216 17:20:13.695389 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c07f58cd-ea21-4cb3-a3db-0d184c3628bd","Type":"ContainerStarted","Data":"912da8a987baeb43734bc262c3ed685b95b08a40c6e4bf2e8156b91504441ab2"} Feb 16 17:20:13 crc kubenswrapper[4794]: I0216 17:20:13.697940 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-xwlnp" event={"ID":"c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53","Type":"ContainerStarted","Data":"2441b55749cadba9f0f5e67505d642a70e6407ab5c026cfa0ddb5574c789f65b"} Feb 16 17:20:13 crc kubenswrapper[4794]: I0216 
17:20:13.699163 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8695c855-a285-408e-a018-ee0060a832e1","Type":"ContainerStarted","Data":"a3b6dbfa769372bf5360f083afb8310a571788541684e719f1f9da452f5b82d2"} Feb 16 17:20:13 crc kubenswrapper[4794]: I0216 17:20:13.844119 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 17:20:13 crc kubenswrapper[4794]: I0216 17:20:13.874974 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5bcc65df4f-mfqcw"] Feb 16 17:20:13 crc kubenswrapper[4794]: I0216 17:20:13.990274 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pn2qh" Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.023498 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gfk7g" Feb 16 17:20:14 crc kubenswrapper[4794]: W0216 17:20:14.044489 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d6f0b7b_1214_4425_a850_09933e0e9a6e.slice/crio-a1e81dae2c43526579221a535c210d4d750db4ff55ce4a55759e79c6ccdb7e7f WatchSource:0}: Error finding container a1e81dae2c43526579221a535c210d4d750db4ff55ce4a55759e79c6ccdb7e7f: Status 404 returned error can't find the container with id a1e81dae2c43526579221a535c210d4d750db4ff55ce4a55759e79c6ccdb7e7f Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.049397 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.050484 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdfjc\" (UniqueName: \"kubernetes.io/projected/771cb777-b410-44fc-bd7a-d5057227dad8-kube-api-access-pdfjc\") pod \"771cb777-b410-44fc-bd7a-d5057227dad8\" (UID: 
\"771cb777-b410-44fc-bd7a-d5057227dad8\") " Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.050693 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771cb777-b410-44fc-bd7a-d5057227dad8-config\") pod \"771cb777-b410-44fc-bd7a-d5057227dad8\" (UID: \"771cb777-b410-44fc-bd7a-d5057227dad8\") " Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.052944 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771cb777-b410-44fc-bd7a-d5057227dad8-config" (OuterVolumeSpecName: "config") pod "771cb777-b410-44fc-bd7a-d5057227dad8" (UID: "771cb777-b410-44fc-bd7a-d5057227dad8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.069809 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/771cb777-b410-44fc-bd7a-d5057227dad8-kube-api-access-pdfjc" (OuterVolumeSpecName: "kube-api-access-pdfjc") pod "771cb777-b410-44fc-bd7a-d5057227dad8" (UID: "771cb777-b410-44fc-bd7a-d5057227dad8"). InnerVolumeSpecName "kube-api-access-pdfjc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.095773 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.146437 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-frfcd"] Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.152423 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3a39e2d-9839-4c07-9cec-466372443514-dns-svc\") pod \"f3a39e2d-9839-4c07-9cec-466372443514\" (UID: \"f3a39e2d-9839-4c07-9cec-466372443514\") " Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.152533 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jngxb\" (UniqueName: \"kubernetes.io/projected/f3a39e2d-9839-4c07-9cec-466372443514-kube-api-access-jngxb\") pod \"f3a39e2d-9839-4c07-9cec-466372443514\" (UID: \"f3a39e2d-9839-4c07-9cec-466372443514\") " Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.152609 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3a39e2d-9839-4c07-9cec-466372443514-config\") pod \"f3a39e2d-9839-4c07-9cec-466372443514\" (UID: \"f3a39e2d-9839-4c07-9cec-466372443514\") " Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.153105 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdfjc\" (UniqueName: \"kubernetes.io/projected/771cb777-b410-44fc-bd7a-d5057227dad8-kube-api-access-pdfjc\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.153162 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771cb777-b410-44fc-bd7a-d5057227dad8-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 
17:20:14.153622 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3a39e2d-9839-4c07-9cec-466372443514-config" (OuterVolumeSpecName: "config") pod "f3a39e2d-9839-4c07-9cec-466372443514" (UID: "f3a39e2d-9839-4c07-9cec-466372443514"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.153968 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3a39e2d-9839-4c07-9cec-466372443514-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f3a39e2d-9839-4c07-9cec-466372443514" (UID: "f3a39e2d-9839-4c07-9cec-466372443514"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.159578 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a39e2d-9839-4c07-9cec-466372443514-kube-api-access-jngxb" (OuterVolumeSpecName: "kube-api-access-jngxb") pod "f3a39e2d-9839-4c07-9cec-466372443514" (UID: "f3a39e2d-9839-4c07-9cec-466372443514"). InnerVolumeSpecName "kube-api-access-jngxb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:14 crc kubenswrapper[4794]: W0216 17:20:14.181349 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbad8e694_f919_4a68_b0ce_95c9b55ba56a.slice/crio-e481eb4b741da234a1130dc005d8be965495e49be8d2c7cd0d64649b78b7cab1 WatchSource:0}: Error finding container e481eb4b741da234a1130dc005d8be965495e49be8d2c7cd0d64649b78b7cab1: Status 404 returned error can't find the container with id e481eb4b741da234a1130dc005d8be965495e49be8d2c7cd0d64649b78b7cab1 Feb 16 17:20:14 crc kubenswrapper[4794]: W0216 17:20:14.191379 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6ba4ad1_ede1_49d7_a317_8f6d71134947.slice/crio-ae23c716615fad32757ad572209e833eadf532da8ed1d62debba92a0532be6ab WatchSource:0}: Error finding container ae23c716615fad32757ad572209e833eadf532da8ed1d62debba92a0532be6ab: Status 404 returned error can't find the container with id ae23c716615fad32757ad572209e833eadf532da8ed1d62debba92a0532be6ab Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.292869 4794 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3a39e2d-9839-4c07-9cec-466372443514-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.293393 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jngxb\" (UniqueName: \"kubernetes.io/projected/f3a39e2d-9839-4c07-9cec-466372443514-kube-api-access-jngxb\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.293412 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3a39e2d-9839-4c07-9cec-466372443514-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.331926 4794 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jgbgf"] Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.708695 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5bcc65df4f-mfqcw" event={"ID":"be1ac6c4-345c-41e7-992b-f53c4c4eba25","Type":"ContainerStarted","Data":"3692a12971d0aba1230d18c7e339fa727f0a819a3819188c781a1065f54349d5"} Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.708850 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5bcc65df4f-mfqcw" event={"ID":"be1ac6c4-345c-41e7-992b-f53c4c4eba25","Type":"ContainerStarted","Data":"4936f2ba0eeff22d293348f8f4df6bcfbfdc82c2d49bafd9f79b45a0af76316f"} Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.711242 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"47572286-fbbf-4189-9c6f-feb54624ee2a","Type":"ContainerStarted","Data":"92a5854561520f29512043bfa53b1c5f9a1f3caae385e57af28b57dc0df64414"} Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.714294 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pn2qh" event={"ID":"771cb777-b410-44fc-bd7a-d5057227dad8","Type":"ContainerDied","Data":"e47906b3ce735029130e4e54bcef7e8c591336f81e42ffe56aea768d001662d0"} Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.714417 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pn2qh"
Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.718000 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"8fb6be66-7fef-4554-897b-30d9f4637138","Type":"ContainerStarted","Data":"a5611785ff80a2040a0e9583d8fe5567236fc1088f42337abc77e4841bba2724"}
Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.719722 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"927505a3-c47f-4b5a-ac60-d35b0140edfe","Type":"ContainerStarted","Data":"f6aa9af901be46a6932a40b3ac4f4f4a0b358f909c6021484b999846e372d032"}
Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.720969 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gfk7g"
Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.720966 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-gfk7g" event={"ID":"f3a39e2d-9839-4c07-9cec-466372443514","Type":"ContainerDied","Data":"917722aa0f224f1b5b84883f2832ae33497e21970eabffa30d3bce1c434966d5"}
Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.722657 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-frfcd" event={"ID":"e6ba4ad1-ede1-49d7-a317-8f6d71134947","Type":"ContainerStarted","Data":"ae23c716615fad32757ad572209e833eadf532da8ed1d62debba92a0532be6ab"}
Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.723685 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d6f0b7b-1214-4425-a850-09933e0e9a6e","Type":"ContainerStarted","Data":"a1e81dae2c43526579221a535c210d4d750db4ff55ce4a55759e79c6ccdb7e7f"}
Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.724689 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jgbgf" event={"ID":"004145a2-867a-43fd-be9d-ad53806a1c19","Type":"ContainerStarted","Data":"e44bd1d2af05c135e2b8d70f643ac309a97481a6daf28007a03c924c6cf4261c"}
Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.726142 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8","Type":"ContainerStarted","Data":"b004f25d6252ce636e11c9fcd2ce973a1cb440882c3b2a80e3a5d3acf1ec4abf"}
Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.728265 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bad8e694-f919-4a68-b0ce-95c9b55ba56a","Type":"ContainerStarted","Data":"e481eb4b741da234a1130dc005d8be965495e49be8d2c7cd0d64649b78b7cab1"}
Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.734710 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5bcc65df4f-mfqcw" podStartSLOduration=16.734690824 podStartE2EDuration="16.734690824s" podCreationTimestamp="2026-02-16 17:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:20:14.732641742 +0000 UTC m=+1240.680736399" watchObservedRunningTime="2026-02-16 17:20:14.734690824 +0000 UTC m=+1240.682785461"
Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.735350 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"026253d8-eaea-4c12-91e0-455331cdaa5e","Type":"ContainerStarted","Data":"5f28321fb236a1745593b9c7644f21bfbf3b8430f0f512f514c4f8f1c040ee02"}
Feb 16 17:20:14 crc kubenswrapper[4794]: I0216 17:20:14.988218 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gfk7g"]
Feb 16 17:20:15 crc kubenswrapper[4794]: I0216 17:20:15.009523 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gfk7g"]
Feb 16 17:20:15 crc kubenswrapper[4794]: I0216 17:20:15.025789 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 16 17:20:15 crc kubenswrapper[4794]: I0216 17:20:15.085077 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pn2qh"]
Feb 16 17:20:15 crc kubenswrapper[4794]: I0216 17:20:15.095331 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pn2qh"]
Feb 16 17:20:15 crc kubenswrapper[4794]: W0216 17:20:15.713125 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeab559f8_3130_43e5_bbf7_cf980cb15a56.slice/crio-363d1986c09b7de72a59293aa3b99970d101bceb05df43625f5f7c0f15c69dfe WatchSource:0}: Error finding container 363d1986c09b7de72a59293aa3b99970d101bceb05df43625f5f7c0f15c69dfe: Status 404 returned error can't find the container with id 363d1986c09b7de72a59293aa3b99970d101bceb05df43625f5f7c0f15c69dfe
Feb 16 17:20:15 crc kubenswrapper[4794]: I0216 17:20:15.746893 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"eab559f8-3130-43e5-bbf7-cf980cb15a56","Type":"ContainerStarted","Data":"363d1986c09b7de72a59293aa3b99970d101bceb05df43625f5f7c0f15c69dfe"}
Feb 16 17:20:15 crc kubenswrapper[4794]: I0216 17:20:15.837385 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 16 17:20:16 crc kubenswrapper[4794]: I0216 17:20:16.758465 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8528fad2-4c8a-4171-92a6-eb31e80d0f2e","Type":"ContainerStarted","Data":"8bdd12f6982b925ade9a466253ea8cfe026fb4f3a58658bb9d25ced93826e3c2"}
Feb 16 17:20:16 crc kubenswrapper[4794]: I0216 17:20:16.804777 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="771cb777-b410-44fc-bd7a-d5057227dad8" path="/var/lib/kubelet/pods/771cb777-b410-44fc-bd7a-d5057227dad8/volumes"
Feb 16 17:20:16 crc kubenswrapper[4794]: I0216 17:20:16.805405 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3a39e2d-9839-4c07-9cec-466372443514" path="/var/lib/kubelet/pods/f3a39e2d-9839-4c07-9cec-466372443514/volumes"
Feb 16 17:20:18 crc kubenswrapper[4794]: I0216 17:20:18.749972 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5bcc65df4f-mfqcw"
Feb 16 17:20:18 crc kubenswrapper[4794]: I0216 17:20:18.750044 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5bcc65df4f-mfqcw"
Feb 16 17:20:18 crc kubenswrapper[4794]: I0216 17:20:18.754924 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5bcc65df4f-mfqcw"
Feb 16 17:20:18 crc kubenswrapper[4794]: I0216 17:20:18.811585 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5bcc65df4f-mfqcw"
Feb 16 17:20:18 crc kubenswrapper[4794]: I0216 17:20:18.871298 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-576f6bf7c-mkh5d"]
Feb 16 17:20:20 crc kubenswrapper[4794]: I0216 17:20:20.140647 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 17:20:20 crc kubenswrapper[4794]: I0216 17:20:20.140983 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 17:20:21 crc kubenswrapper[4794]: I0216 17:20:21.814619 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"927505a3-c47f-4b5a-ac60-d35b0140edfe","Type":"ContainerStarted","Data":"4fdd5f484e0f7e1e04b14ae306ffd75e9bb33b13d173c2e370331e3d56490c72"}
Feb 16 17:20:21 crc kubenswrapper[4794]: I0216 17:20:21.817773 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c07f58cd-ea21-4cb3-a3db-0d184c3628bd","Type":"ContainerStarted","Data":"cf6733d7ff433fb34b8a6378eab2424071f8c1fb5f2ff407fcdc566d19653452"}
Feb 16 17:20:21 crc kubenswrapper[4794]: I0216 17:20:21.825459 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-xwlnp" event={"ID":"c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53","Type":"ContainerStarted","Data":"d8b1c63dcce6f55d935e938236ad4f8245424cd39712bcaad5261db51def3187"}
Feb 16 17:20:21 crc kubenswrapper[4794]: I0216 17:20:21.829009 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8695c855-a285-408e-a018-ee0060a832e1","Type":"ContainerStarted","Data":"ddd77960f7bec1583d85e65f3bc6e5264c4fd2c809c7033f8dee7243b5d11c51"}
Feb 16 17:20:21 crc kubenswrapper[4794]: I0216 17:20:21.829152 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Feb 16 17:20:21 crc kubenswrapper[4794]: I0216 17:20:21.924658 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=20.897599743 podStartE2EDuration="27.924633797s" podCreationTimestamp="2026-02-16 17:19:54 +0000 UTC" firstStartedPulling="2026-02-16 17:20:13.166151247 +0000 UTC m=+1239.114245894" lastFinishedPulling="2026-02-16 17:20:20.193185301 +0000 UTC m=+1246.141279948" observedRunningTime="2026-02-16 17:20:21.889457797 +0000 UTC m=+1247.837552464" watchObservedRunningTime="2026-02-16 17:20:21.924633797 +0000 UTC m=+1247.872728434"
Feb 16 17:20:21 crc kubenswrapper[4794]: I0216 17:20:21.944154 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-xwlnp" podStartSLOduration=17.127741882 podStartE2EDuration="23.94413092s" podCreationTimestamp="2026-02-16 17:19:58 +0000 UTC" firstStartedPulling="2026-02-16 17:20:13.203369728 +0000 UTC m=+1239.151464375" lastFinishedPulling="2026-02-16 17:20:20.019758766 +0000 UTC m=+1245.967853413" observedRunningTime="2026-02-16 17:20:21.932005873 +0000 UTC m=+1247.880100520" watchObservedRunningTime="2026-02-16 17:20:21.94413092 +0000 UTC m=+1247.892225567"
Feb 16 17:20:22 crc kubenswrapper[4794]: I0216 17:20:22.853574 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-frfcd" event={"ID":"e6ba4ad1-ede1-49d7-a317-8f6d71134947","Type":"ContainerStarted","Data":"53598e58811ac0a3da692c93900603d19678986bcd6be25813ae133041719f1f"}
Feb 16 17:20:22 crc kubenswrapper[4794]: I0216 17:20:22.854600 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-frfcd"
Feb 16 17:20:22 crc kubenswrapper[4794]: I0216 17:20:22.856087 4794 generic.go:334] "Generic (PLEG): container finished" podID="004145a2-867a-43fd-be9d-ad53806a1c19" containerID="9aae61aeb7feb790ac5b9cb192aaed0f06ad6fda86e3909d0d728b7d86ff6fa8" exitCode=0
Feb 16 17:20:22 crc kubenswrapper[4794]: I0216 17:20:22.856234 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jgbgf" event={"ID":"004145a2-867a-43fd-be9d-ad53806a1c19","Type":"ContainerDied","Data":"9aae61aeb7feb790ac5b9cb192aaed0f06ad6fda86e3909d0d728b7d86ff6fa8"}
Feb 16 17:20:22 crc kubenswrapper[4794]: I0216 17:20:22.886034 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-frfcd" podStartSLOduration=16.354867937 podStartE2EDuration="22.886009603s" podCreationTimestamp="2026-02-16 17:20:00 +0000 UTC" firstStartedPulling="2026-02-16 17:20:14.194967649 +0000 UTC m=+1240.143062306" lastFinishedPulling="2026-02-16 17:20:20.726109325 +0000 UTC m=+1246.674203972" observedRunningTime="2026-02-16 17:20:22.878799321 +0000 UTC m=+1248.826893988" watchObservedRunningTime="2026-02-16 17:20:22.886009603 +0000 UTC m=+1248.834104260"
Feb 16 17:20:23 crc kubenswrapper[4794]: I0216 17:20:23.866872 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bad8e694-f919-4a68-b0ce-95c9b55ba56a","Type":"ContainerStarted","Data":"ce842615109c9f94ae5eb12663a3827bf072cf92dc03df8bb5197cf9df325015"}
Feb 16 17:20:23 crc kubenswrapper[4794]: I0216 17:20:23.867258 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 16 17:20:23 crc kubenswrapper[4794]: I0216 17:20:23.871763 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"eab559f8-3130-43e5-bbf7-cf980cb15a56","Type":"ContainerStarted","Data":"bf7cdfbc214091b4641d4b1b6149a3837901487fc817809472093d428b7550a7"}
Feb 16 17:20:23 crc kubenswrapper[4794]: I0216 17:20:23.875812 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jgbgf" event={"ID":"004145a2-867a-43fd-be9d-ad53806a1c19","Type":"ContainerStarted","Data":"0ad7bafc5c0643db1ae129c6508411782745ea7317516168e6725c3feb5f65fd"}
Feb 16 17:20:23 crc kubenswrapper[4794]: I0216 17:20:23.875863 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jgbgf" event={"ID":"004145a2-867a-43fd-be9d-ad53806a1c19","Type":"ContainerStarted","Data":"042cb7fd637c86be90c9721e66d582fa91e94a0101c3da4d0ee14754ede18f5a"}
Feb 16 17:20:23 crc kubenswrapper[4794]: I0216 17:20:23.876062 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jgbgf"
Feb 16 17:20:23 crc kubenswrapper[4794]: I0216 17:20:23.879232 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8528fad2-4c8a-4171-92a6-eb31e80d0f2e","Type":"ContainerStarted","Data":"b3ccd68491688904c9a98c1133d2a448916a468605c08410909eb4c537173b4f"}
Feb 16 17:20:23 crc kubenswrapper[4794]: I0216 17:20:23.892730 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=18.292634242 podStartE2EDuration="26.892666605s" podCreationTimestamp="2026-02-16 17:19:57 +0000 UTC" firstStartedPulling="2026-02-16 17:20:14.184474704 +0000 UTC m=+1240.132569351" lastFinishedPulling="2026-02-16 17:20:22.784507057 +0000 UTC m=+1248.732601714" observedRunningTime="2026-02-16 17:20:23.879550773 +0000 UTC m=+1249.827645460" watchObservedRunningTime="2026-02-16 17:20:23.892666605 +0000 UTC m=+1249.840761292"
Feb 16 17:20:23 crc kubenswrapper[4794]: I0216 17:20:23.916324 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-jgbgf" podStartSLOduration=17.680144604 podStartE2EDuration="23.916284242s" podCreationTimestamp="2026-02-16 17:20:00 +0000 UTC" firstStartedPulling="2026-02-16 17:20:14.344879759 +0000 UTC m=+1240.292974396" lastFinishedPulling="2026-02-16 17:20:20.581019387 +0000 UTC m=+1246.529114034" observedRunningTime="2026-02-16 17:20:23.900798531 +0000 UTC m=+1249.848893208" watchObservedRunningTime="2026-02-16 17:20:23.916284242 +0000 UTC m=+1249.864378909"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.014283 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-b25d2"]
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.016480 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-b25d2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.020216 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.050722 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-b25d2"]
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.062265 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a5d8158-28e2-414b-8ebd-abce9aa4b12d-combined-ca-bundle\") pod \"ovn-controller-metrics-b25d2\" (UID: \"6a5d8158-28e2-414b-8ebd-abce9aa4b12d\") " pod="openstack/ovn-controller-metrics-b25d2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.062360 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6a5d8158-28e2-414b-8ebd-abce9aa4b12d-ovs-rundir\") pod \"ovn-controller-metrics-b25d2\" (UID: \"6a5d8158-28e2-414b-8ebd-abce9aa4b12d\") " pod="openstack/ovn-controller-metrics-b25d2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.062396 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a5d8158-28e2-414b-8ebd-abce9aa4b12d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b25d2\" (UID: \"6a5d8158-28e2-414b-8ebd-abce9aa4b12d\") " pod="openstack/ovn-controller-metrics-b25d2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.062532 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a5d8158-28e2-414b-8ebd-abce9aa4b12d-config\") pod \"ovn-controller-metrics-b25d2\" (UID: \"6a5d8158-28e2-414b-8ebd-abce9aa4b12d\") " pod="openstack/ovn-controller-metrics-b25d2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.062574 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6a5d8158-28e2-414b-8ebd-abce9aa4b12d-ovn-rundir\") pod \"ovn-controller-metrics-b25d2\" (UID: \"6a5d8158-28e2-414b-8ebd-abce9aa4b12d\") " pod="openstack/ovn-controller-metrics-b25d2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.062642 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s22kq\" (UniqueName: \"kubernetes.io/projected/6a5d8158-28e2-414b-8ebd-abce9aa4b12d-kube-api-access-s22kq\") pod \"ovn-controller-metrics-b25d2\" (UID: \"6a5d8158-28e2-414b-8ebd-abce9aa4b12d\") " pod="openstack/ovn-controller-metrics-b25d2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.164021 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6a5d8158-28e2-414b-8ebd-abce9aa4b12d-ovs-rundir\") pod \"ovn-controller-metrics-b25d2\" (UID: \"6a5d8158-28e2-414b-8ebd-abce9aa4b12d\") " pod="openstack/ovn-controller-metrics-b25d2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.164122 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a5d8158-28e2-414b-8ebd-abce9aa4b12d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b25d2\" (UID: \"6a5d8158-28e2-414b-8ebd-abce9aa4b12d\") " pod="openstack/ovn-controller-metrics-b25d2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.164217 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a5d8158-28e2-414b-8ebd-abce9aa4b12d-config\") pod \"ovn-controller-metrics-b25d2\" (UID: \"6a5d8158-28e2-414b-8ebd-abce9aa4b12d\") " pod="openstack/ovn-controller-metrics-b25d2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.164253 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6a5d8158-28e2-414b-8ebd-abce9aa4b12d-ovn-rundir\") pod \"ovn-controller-metrics-b25d2\" (UID: \"6a5d8158-28e2-414b-8ebd-abce9aa4b12d\") " pod="openstack/ovn-controller-metrics-b25d2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.164328 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s22kq\" (UniqueName: \"kubernetes.io/projected/6a5d8158-28e2-414b-8ebd-abce9aa4b12d-kube-api-access-s22kq\") pod \"ovn-controller-metrics-b25d2\" (UID: \"6a5d8158-28e2-414b-8ebd-abce9aa4b12d\") " pod="openstack/ovn-controller-metrics-b25d2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.164388 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6a5d8158-28e2-414b-8ebd-abce9aa4b12d-ovs-rundir\") pod \"ovn-controller-metrics-b25d2\" (UID: \"6a5d8158-28e2-414b-8ebd-abce9aa4b12d\") " pod="openstack/ovn-controller-metrics-b25d2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.164411 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a5d8158-28e2-414b-8ebd-abce9aa4b12d-combined-ca-bundle\") pod \"ovn-controller-metrics-b25d2\" (UID: \"6a5d8158-28e2-414b-8ebd-abce9aa4b12d\") " pod="openstack/ovn-controller-metrics-b25d2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.164463 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6a5d8158-28e2-414b-8ebd-abce9aa4b12d-ovn-rundir\") pod \"ovn-controller-metrics-b25d2\" (UID: \"6a5d8158-28e2-414b-8ebd-abce9aa4b12d\") " pod="openstack/ovn-controller-metrics-b25d2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.165173 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a5d8158-28e2-414b-8ebd-abce9aa4b12d-config\") pod \"ovn-controller-metrics-b25d2\" (UID: \"6a5d8158-28e2-414b-8ebd-abce9aa4b12d\") " pod="openstack/ovn-controller-metrics-b25d2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.169570 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a5d8158-28e2-414b-8ebd-abce9aa4b12d-combined-ca-bundle\") pod \"ovn-controller-metrics-b25d2\" (UID: \"6a5d8158-28e2-414b-8ebd-abce9aa4b12d\") " pod="openstack/ovn-controller-metrics-b25d2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.170617 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a5d8158-28e2-414b-8ebd-abce9aa4b12d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b25d2\" (UID: \"6a5d8158-28e2-414b-8ebd-abce9aa4b12d\") " pod="openstack/ovn-controller-metrics-b25d2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.191902 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s22kq\" (UniqueName: \"kubernetes.io/projected/6a5d8158-28e2-414b-8ebd-abce9aa4b12d-kube-api-access-s22kq\") pod \"ovn-controller-metrics-b25d2\" (UID: \"6a5d8158-28e2-414b-8ebd-abce9aa4b12d\") " pod="openstack/ovn-controller-metrics-b25d2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.283544 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tjvdt"]
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.348643 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-b25d2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.374598 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-ffrdr"]
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.378397 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.390055 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.391929 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-ffrdr"]
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.471368 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d282d685-2404-4141-9865-03de5d0928c5-config\") pod \"dnsmasq-dns-5bf47b49b7-ffrdr\" (UID: \"d282d685-2404-4141-9865-03de5d0928c5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.471455 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d282d685-2404-4141-9865-03de5d0928c5-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-ffrdr\" (UID: \"d282d685-2404-4141-9865-03de5d0928c5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.471507 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzjq9\" (UniqueName: \"kubernetes.io/projected/d282d685-2404-4141-9865-03de5d0928c5-kube-api-access-lzjq9\") pod \"dnsmasq-dns-5bf47b49b7-ffrdr\" (UID: \"d282d685-2404-4141-9865-03de5d0928c5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.471596 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d282d685-2404-4141-9865-03de5d0928c5-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-ffrdr\" (UID: \"d282d685-2404-4141-9865-03de5d0928c5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.522034 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-sz7q7"]
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.573926 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d282d685-2404-4141-9865-03de5d0928c5-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-ffrdr\" (UID: \"d282d685-2404-4141-9865-03de5d0928c5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.573994 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d282d685-2404-4141-9865-03de5d0928c5-config\") pod \"dnsmasq-dns-5bf47b49b7-ffrdr\" (UID: \"d282d685-2404-4141-9865-03de5d0928c5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.574046 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d282d685-2404-4141-9865-03de5d0928c5-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-ffrdr\" (UID: \"d282d685-2404-4141-9865-03de5d0928c5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.574089 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzjq9\" (UniqueName: \"kubernetes.io/projected/d282d685-2404-4141-9865-03de5d0928c5-kube-api-access-lzjq9\") pod \"dnsmasq-dns-5bf47b49b7-ffrdr\" (UID: \"d282d685-2404-4141-9865-03de5d0928c5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.575132 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d282d685-2404-4141-9865-03de5d0928c5-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-ffrdr\" (UID: \"d282d685-2404-4141-9865-03de5d0928c5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.575702 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d282d685-2404-4141-9865-03de5d0928c5-config\") pod \"dnsmasq-dns-5bf47b49b7-ffrdr\" (UID: \"d282d685-2404-4141-9865-03de5d0928c5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.576260 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d282d685-2404-4141-9865-03de5d0928c5-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-ffrdr\" (UID: \"d282d685-2404-4141-9865-03de5d0928c5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.587352 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-dndb2"]
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.588933 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-dndb2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.591340 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.598166 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzjq9\" (UniqueName: \"kubernetes.io/projected/d282d685-2404-4141-9865-03de5d0928c5-kube-api-access-lzjq9\") pod \"dnsmasq-dns-5bf47b49b7-ffrdr\" (UID: \"d282d685-2404-4141-9865-03de5d0928c5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.610045 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-dndb2"]
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.682741 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-dndb2\" (UID: \"53be55ab-28ee-4368-8650-f5c90340992a\") " pod="openstack/dnsmasq-dns-8554648995-dndb2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.682860 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-config\") pod \"dnsmasq-dns-8554648995-dndb2\" (UID: \"53be55ab-28ee-4368-8650-f5c90340992a\") " pod="openstack/dnsmasq-dns-8554648995-dndb2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.682986 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-dns-svc\") pod \"dnsmasq-dns-8554648995-dndb2\" (UID: \"53be55ab-28ee-4368-8650-f5c90340992a\") " pod="openstack/dnsmasq-dns-8554648995-dndb2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.683190 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27mbw\" (UniqueName: \"kubernetes.io/projected/53be55ab-28ee-4368-8650-f5c90340992a-kube-api-access-27mbw\") pod \"dnsmasq-dns-8554648995-dndb2\" (UID: \"53be55ab-28ee-4368-8650-f5c90340992a\") " pod="openstack/dnsmasq-dns-8554648995-dndb2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.683265 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-dndb2\" (UID: \"53be55ab-28ee-4368-8650-f5c90340992a\") " pod="openstack/dnsmasq-dns-8554648995-dndb2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.706940 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.784859 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-config\") pod \"dnsmasq-dns-8554648995-dndb2\" (UID: \"53be55ab-28ee-4368-8650-f5c90340992a\") " pod="openstack/dnsmasq-dns-8554648995-dndb2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.784982 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-dns-svc\") pod \"dnsmasq-dns-8554648995-dndb2\" (UID: \"53be55ab-28ee-4368-8650-f5c90340992a\") " pod="openstack/dnsmasq-dns-8554648995-dndb2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.785085 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27mbw\" (UniqueName: \"kubernetes.io/projected/53be55ab-28ee-4368-8650-f5c90340992a-kube-api-access-27mbw\") pod \"dnsmasq-dns-8554648995-dndb2\" (UID: \"53be55ab-28ee-4368-8650-f5c90340992a\") " pod="openstack/dnsmasq-dns-8554648995-dndb2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.785128 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-dndb2\" (UID: \"53be55ab-28ee-4368-8650-f5c90340992a\") " pod="openstack/dnsmasq-dns-8554648995-dndb2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.785222 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-dndb2\" (UID: \"53be55ab-28ee-4368-8650-f5c90340992a\") " pod="openstack/dnsmasq-dns-8554648995-dndb2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.786190 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-dndb2\" (UID: \"53be55ab-28ee-4368-8650-f5c90340992a\") " pod="openstack/dnsmasq-dns-8554648995-dndb2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.786868 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-config\") pod \"dnsmasq-dns-8554648995-dndb2\" (UID: \"53be55ab-28ee-4368-8650-f5c90340992a\") " pod="openstack/dnsmasq-dns-8554648995-dndb2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.787534 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-dns-svc\") pod \"dnsmasq-dns-8554648995-dndb2\" (UID: \"53be55ab-28ee-4368-8650-f5c90340992a\") " pod="openstack/dnsmasq-dns-8554648995-dndb2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.788231 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-dndb2\" (UID: \"53be55ab-28ee-4368-8650-f5c90340992a\") " pod="openstack/dnsmasq-dns-8554648995-dndb2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.832802 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27mbw\" (UniqueName: \"kubernetes.io/projected/53be55ab-28ee-4368-8650-f5c90340992a-kube-api-access-27mbw\") pod \"dnsmasq-dns-8554648995-dndb2\" (UID: \"53be55ab-28ee-4368-8650-f5c90340992a\") " pod="openstack/dnsmasq-dns-8554648995-dndb2"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.900413 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d6f0b7b-1214-4425-a850-09933e0e9a6e","Type":"ContainerStarted","Data":"c685149ca5c1cacd22f1f520b28739a9acd18c22814651677ff962615d1dc812"}
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.900729 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jgbgf"
Feb 16 17:20:24 crc kubenswrapper[4794]: I0216 17:20:24.988089 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-dndb2"
Feb 16 17:20:25 crc kubenswrapper[4794]: I0216 17:20:25.828670 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-sz7q7"
Feb 16 17:20:25 crc kubenswrapper[4794]: I0216 17:20:25.829452 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-tjvdt"
Feb 16 17:20:25 crc kubenswrapper[4794]: I0216 17:20:25.946550 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-sz7q7" event={"ID":"d554e5d8-e390-484d-b414-ace409b51d91","Type":"ContainerDied","Data":"9071a90f7fa541d5a4bd07f7d4d727fba8e6c8f5ddf8370c1774cc7490a72fa6"}
Feb 16 17:20:25 crc kubenswrapper[4794]: I0216 17:20:25.946684 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-sz7q7"
Feb 16 17:20:25 crc kubenswrapper[4794]: I0216 17:20:25.949518 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-tjvdt" event={"ID":"ab201dca-05ea-4f61-a71f-be55c9587777","Type":"ContainerDied","Data":"78562777748db48d9f8f620c868e837a4e63365c1512c6fd7f5d87a7c8e15e7e"}
Feb 16 17:20:25 crc kubenswrapper[4794]: I0216 17:20:25.949616 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-tjvdt"
Feb 16 17:20:25 crc kubenswrapper[4794]: I0216 17:20:25.961603 4794 generic.go:334] "Generic (PLEG): container finished" podID="927505a3-c47f-4b5a-ac60-d35b0140edfe" containerID="4fdd5f484e0f7e1e04b14ae306ffd75e9bb33b13d173c2e370331e3d56490c72" exitCode=0
Feb 16 17:20:25 crc kubenswrapper[4794]: I0216 17:20:25.961670 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"927505a3-c47f-4b5a-ac60-d35b0140edfe","Type":"ContainerDied","Data":"4fdd5f484e0f7e1e04b14ae306ffd75e9bb33b13d173c2e370331e3d56490c72"}
Feb 16 17:20:25 crc kubenswrapper[4794]: I0216 17:20:25.963806 4794 generic.go:334] "Generic (PLEG): container finished" podID="c07f58cd-ea21-4cb3-a3db-0d184c3628bd" containerID="cf6733d7ff433fb34b8a6378eab2424071f8c1fb5f2ff407fcdc566d19653452" exitCode=0
Feb 16 17:20:25 crc kubenswrapper[4794]: I0216 17:20:25.963904 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c07f58cd-ea21-4cb3-a3db-0d184c3628bd","Type":"ContainerDied","Data":"cf6733d7ff433fb34b8a6378eab2424071f8c1fb5f2ff407fcdc566d19653452"}
Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.042344 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d554e5d8-e390-484d-b414-ace409b51d91-config\") pod \"d554e5d8-e390-484d-b414-ace409b51d91\" (UID: \"d554e5d8-e390-484d-b414-ace409b51d91\") "
Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.042623 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d554e5d8-e390-484d-b414-ace409b51d91-dns-svc\") pod \"d554e5d8-e390-484d-b414-ace409b51d91\" (UID: \"d554e5d8-e390-484d-b414-ace409b51d91\") "
Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.042704 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume
started for volume \"kube-api-access-fw9km\" (UniqueName: \"kubernetes.io/projected/ab201dca-05ea-4f61-a71f-be55c9587777-kube-api-access-fw9km\") pod \"ab201dca-05ea-4f61-a71f-be55c9587777\" (UID: \"ab201dca-05ea-4f61-a71f-be55c9587777\") " Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.042835 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab201dca-05ea-4f61-a71f-be55c9587777-dns-svc\") pod \"ab201dca-05ea-4f61-a71f-be55c9587777\" (UID: \"ab201dca-05ea-4f61-a71f-be55c9587777\") " Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.042862 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab201dca-05ea-4f61-a71f-be55c9587777-config\") pod \"ab201dca-05ea-4f61-a71f-be55c9587777\" (UID: \"ab201dca-05ea-4f61-a71f-be55c9587777\") " Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.042909 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gh62\" (UniqueName: \"kubernetes.io/projected/d554e5d8-e390-484d-b414-ace409b51d91-kube-api-access-9gh62\") pod \"d554e5d8-e390-484d-b414-ace409b51d91\" (UID: \"d554e5d8-e390-484d-b414-ace409b51d91\") " Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.044881 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d554e5d8-e390-484d-b414-ace409b51d91-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d554e5d8-e390-484d-b414-ace409b51d91" (UID: "d554e5d8-e390-484d-b414-ace409b51d91"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.045336 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d554e5d8-e390-484d-b414-ace409b51d91-config" (OuterVolumeSpecName: "config") pod "d554e5d8-e390-484d-b414-ace409b51d91" (UID: "d554e5d8-e390-484d-b414-ace409b51d91"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.047604 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d554e5d8-e390-484d-b414-ace409b51d91-kube-api-access-9gh62" (OuterVolumeSpecName: "kube-api-access-9gh62") pod "d554e5d8-e390-484d-b414-ace409b51d91" (UID: "d554e5d8-e390-484d-b414-ace409b51d91"). InnerVolumeSpecName "kube-api-access-9gh62". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.051385 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab201dca-05ea-4f61-a71f-be55c9587777-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab201dca-05ea-4f61-a71f-be55c9587777" (UID: "ab201dca-05ea-4f61-a71f-be55c9587777"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.051805 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab201dca-05ea-4f61-a71f-be55c9587777-config" (OuterVolumeSpecName: "config") pod "ab201dca-05ea-4f61-a71f-be55c9587777" (UID: "ab201dca-05ea-4f61-a71f-be55c9587777"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.057240 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab201dca-05ea-4f61-a71f-be55c9587777-kube-api-access-fw9km" (OuterVolumeSpecName: "kube-api-access-fw9km") pod "ab201dca-05ea-4f61-a71f-be55c9587777" (UID: "ab201dca-05ea-4f61-a71f-be55c9587777"). InnerVolumeSpecName "kube-api-access-fw9km". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.153636 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d554e5d8-e390-484d-b414-ace409b51d91-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.153685 4794 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d554e5d8-e390-484d-b414-ace409b51d91-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.153698 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw9km\" (UniqueName: \"kubernetes.io/projected/ab201dca-05ea-4f61-a71f-be55c9587777-kube-api-access-fw9km\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.153711 4794 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab201dca-05ea-4f61-a71f-be55c9587777-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.153720 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab201dca-05ea-4f61-a71f-be55c9587777-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.153730 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gh62\" (UniqueName: 
\"kubernetes.io/projected/d554e5d8-e390-484d-b414-ace409b51d91-kube-api-access-9gh62\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.232599 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-dndb2"] Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.322380 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tjvdt"] Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.340955 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tjvdt"] Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.367593 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-sz7q7"] Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.392523 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-sz7q7"] Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.400552 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-b25d2"] Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.480790 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-ffrdr"] Feb 16 17:20:26 crc kubenswrapper[4794]: W0216 17:20:26.496995 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd282d685_2404_4141_9865_03de5d0928c5.slice/crio-ffd0783de29a9a14d7c669a1692aca115fa65707ec70b53549bc2ab0357f382c WatchSource:0}: Error finding container ffd0783de29a9a14d7c669a1692aca115fa65707ec70b53549bc2ab0357f382c: Status 404 returned error can't find the container with id ffd0783de29a9a14d7c669a1692aca115fa65707ec70b53549bc2ab0357f382c Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.807640 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab201dca-05ea-4f61-a71f-be55c9587777" 
path="/var/lib/kubelet/pods/ab201dca-05ea-4f61-a71f-be55c9587777/volumes" Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.808582 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d554e5d8-e390-484d-b414-ace409b51d91" path="/var/lib/kubelet/pods/d554e5d8-e390-484d-b414-ace409b51d91/volumes" Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.976319 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr" event={"ID":"d282d685-2404-4141-9865-03de5d0928c5","Type":"ContainerStarted","Data":"ffd0783de29a9a14d7c669a1692aca115fa65707ec70b53549bc2ab0357f382c"} Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.977886 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c07f58cd-ea21-4cb3-a3db-0d184c3628bd","Type":"ContainerStarted","Data":"d5963dafc540e447f00667f40aa280cbda473360074bb1955bdd35ed3c9614d2"} Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.981034 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"eab559f8-3130-43e5-bbf7-cf980cb15a56","Type":"ContainerStarted","Data":"0cb1bc39b4a9f7f4538105e00a0abd44af18f24648b6ce00b3f7dc13ee6af8b4"} Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.983256 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8528fad2-4c8a-4171-92a6-eb31e80d0f2e","Type":"ContainerStarted","Data":"c992cc2e39691d89c3df6235672d02b4739630d7b30716ed674cdc0910503fe6"} Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.984576 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-dndb2" event={"ID":"53be55ab-28ee-4368-8650-f5c90340992a","Type":"ContainerStarted","Data":"68bb0ae72a7d8a136a4e8f5738f5b3bb80b1dd6b0631373eddb17968d55bbc75"} Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.985612 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-metrics-b25d2" event={"ID":"6a5d8158-28e2-414b-8ebd-abce9aa4b12d","Type":"ContainerStarted","Data":"354a01bd87ddb7e71439f707bf3e9c03efbc8fddc3c7d1a0a227b12f5e4c360e"} Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.985637 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-b25d2" event={"ID":"6a5d8158-28e2-414b-8ebd-abce9aa4b12d","Type":"ContainerStarted","Data":"7b5fd28754137630798d2a8ea2a0856e30c7cd49f029560846bde2a3495f9f27"} Feb 16 17:20:26 crc kubenswrapper[4794]: I0216 17:20:26.987292 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"927505a3-c47f-4b5a-ac60-d35b0140edfe","Type":"ContainerStarted","Data":"e31c6c66f8ff112f33c5082408b39a7c7dca0a363a3fde622636c9bce32d99ae"} Feb 16 17:20:27 crc kubenswrapper[4794]: I0216 17:20:27.026878 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=28.023208173 podStartE2EDuration="35.026856626s" podCreationTimestamp="2026-02-16 17:19:52 +0000 UTC" firstStartedPulling="2026-02-16 17:20:13.19000596 +0000 UTC m=+1239.138100607" lastFinishedPulling="2026-02-16 17:20:20.193654413 +0000 UTC m=+1246.141749060" observedRunningTime="2026-02-16 17:20:27.024956728 +0000 UTC m=+1252.973051385" watchObservedRunningTime="2026-02-16 17:20:27.026856626 +0000 UTC m=+1252.974951273" Feb 16 17:20:27 crc kubenswrapper[4794]: I0216 17:20:27.060161 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=14.124588898 podStartE2EDuration="24.060134348s" podCreationTimestamp="2026-02-16 17:20:03 +0000 UTC" firstStartedPulling="2026-02-16 17:20:15.7178139 +0000 UTC m=+1241.665908547" lastFinishedPulling="2026-02-16 17:20:25.65335934 +0000 UTC m=+1251.601453997" observedRunningTime="2026-02-16 17:20:27.0562747 +0000 UTC m=+1253.004369357" 
watchObservedRunningTime="2026-02-16 17:20:27.060134348 +0000 UTC m=+1253.008228995" Feb 16 17:20:27 crc kubenswrapper[4794]: I0216 17:20:27.085851 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-b25d2" podStartSLOduration=4.085831417 podStartE2EDuration="4.085831417s" podCreationTimestamp="2026-02-16 17:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:20:27.083858138 +0000 UTC m=+1253.031952785" watchObservedRunningTime="2026-02-16 17:20:27.085831417 +0000 UTC m=+1253.033926064" Feb 16 17:20:27 crc kubenswrapper[4794]: I0216 17:20:27.146762 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.468335318 podStartE2EDuration="34.146747578s" podCreationTimestamp="2026-02-16 17:19:53 +0000 UTC" firstStartedPulling="2026-02-16 17:20:13.902456553 +0000 UTC m=+1239.850551200" lastFinishedPulling="2026-02-16 17:20:20.580868773 +0000 UTC m=+1246.528963460" observedRunningTime="2026-02-16 17:20:27.105604697 +0000 UTC m=+1253.053699354" watchObservedRunningTime="2026-02-16 17:20:27.146747578 +0000 UTC m=+1253.094842215" Feb 16 17:20:27 crc kubenswrapper[4794]: I0216 17:20:27.169291 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=18.854745069 podStartE2EDuration="28.169272827s" podCreationTimestamp="2026-02-16 17:19:59 +0000 UTC" firstStartedPulling="2026-02-16 17:20:16.345956362 +0000 UTC m=+1242.294051009" lastFinishedPulling="2026-02-16 17:20:25.66048412 +0000 UTC m=+1251.608578767" observedRunningTime="2026-02-16 17:20:27.145356322 +0000 UTC m=+1253.093450969" watchObservedRunningTime="2026-02-16 17:20:27.169272827 +0000 UTC m=+1253.117367474" Feb 16 17:20:28 crc kubenswrapper[4794]: I0216 17:20:28.012041 4794 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-8554648995-dndb2" event={"ID":"53be55ab-28ee-4368-8650-f5c90340992a","Type":"ContainerStarted","Data":"1436df60f3e91227f02112b0eaafa1b9cc5675ad4b28fc5d7583876df114bda0"} Feb 16 17:20:28 crc kubenswrapper[4794]: I0216 17:20:28.146934 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:28 crc kubenswrapper[4794]: I0216 17:20:28.202787 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:28 crc kubenswrapper[4794]: I0216 17:20:28.555931 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:28 crc kubenswrapper[4794]: I0216 17:20:28.595285 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.034808 4794 generic.go:334] "Generic (PLEG): container finished" podID="53be55ab-28ee-4368-8650-f5c90340992a" containerID="1436df60f3e91227f02112b0eaafa1b9cc5675ad4b28fc5d7583876df114bda0" exitCode=0 Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.034881 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-dndb2" event={"ID":"53be55ab-28ee-4368-8650-f5c90340992a","Type":"ContainerDied","Data":"1436df60f3e91227f02112b0eaafa1b9cc5675ad4b28fc5d7583876df114bda0"} Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.037293 4794 generic.go:334] "Generic (PLEG): container finished" podID="d282d685-2404-4141-9865-03de5d0928c5" containerID="987d6dce6a954698dda23d72d893856fd8c7fa8bc59e1cf5803f69c670e9a145" exitCode=0 Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.037341 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr" 
event={"ID":"d282d685-2404-4141-9865-03de5d0928c5","Type":"ContainerDied","Data":"987d6dce6a954698dda23d72d893856fd8c7fa8bc59e1cf5803f69c670e9a145"} Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.037614 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.037867 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.320099 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.324070 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.596006 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.599724 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.604009 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.604040 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-g884f" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.604512 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.604591 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.622969 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.749283 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05015a0-b648-4ebd-a7f1-2621e125504e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b05015a0-b648-4ebd-a7f1-2621e125504e\") " pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.749494 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05015a0-b648-4ebd-a7f1-2621e125504e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b05015a0-b648-4ebd-a7f1-2621e125504e\") " pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.749681 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjqcl\" (UniqueName: \"kubernetes.io/projected/b05015a0-b648-4ebd-a7f1-2621e125504e-kube-api-access-rjqcl\") pod \"ovn-northd-0\" (UID: 
\"b05015a0-b648-4ebd-a7f1-2621e125504e\") " pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.749770 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b05015a0-b648-4ebd-a7f1-2621e125504e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b05015a0-b648-4ebd-a7f1-2621e125504e\") " pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.749796 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b05015a0-b648-4ebd-a7f1-2621e125504e-scripts\") pod \"ovn-northd-0\" (UID: \"b05015a0-b648-4ebd-a7f1-2621e125504e\") " pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.749963 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05015a0-b648-4ebd-a7f1-2621e125504e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b05015a0-b648-4ebd-a7f1-2621e125504e\") " pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.750177 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b05015a0-b648-4ebd-a7f1-2621e125504e-config\") pod \"ovn-northd-0\" (UID: \"b05015a0-b648-4ebd-a7f1-2621e125504e\") " pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.852531 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b05015a0-b648-4ebd-a7f1-2621e125504e-config\") pod \"ovn-northd-0\" (UID: \"b05015a0-b648-4ebd-a7f1-2621e125504e\") " pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.852956 4794 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05015a0-b648-4ebd-a7f1-2621e125504e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b05015a0-b648-4ebd-a7f1-2621e125504e\") " pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.853019 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05015a0-b648-4ebd-a7f1-2621e125504e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b05015a0-b648-4ebd-a7f1-2621e125504e\") " pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.853103 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjqcl\" (UniqueName: \"kubernetes.io/projected/b05015a0-b648-4ebd-a7f1-2621e125504e-kube-api-access-rjqcl\") pod \"ovn-northd-0\" (UID: \"b05015a0-b648-4ebd-a7f1-2621e125504e\") " pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.853147 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b05015a0-b648-4ebd-a7f1-2621e125504e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b05015a0-b648-4ebd-a7f1-2621e125504e\") " pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.853169 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b05015a0-b648-4ebd-a7f1-2621e125504e-scripts\") pod \"ovn-northd-0\" (UID: \"b05015a0-b648-4ebd-a7f1-2621e125504e\") " pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.853215 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05015a0-b648-4ebd-a7f1-2621e125504e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"b05015a0-b648-4ebd-a7f1-2621e125504e\") " pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.856905 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b05015a0-b648-4ebd-a7f1-2621e125504e-scripts\") pod \"ovn-northd-0\" (UID: \"b05015a0-b648-4ebd-a7f1-2621e125504e\") " pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.859215 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b05015a0-b648-4ebd-a7f1-2621e125504e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b05015a0-b648-4ebd-a7f1-2621e125504e\") " pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.860796 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b05015a0-b648-4ebd-a7f1-2621e125504e-config\") pod \"ovn-northd-0\" (UID: \"b05015a0-b648-4ebd-a7f1-2621e125504e\") " pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.862243 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05015a0-b648-4ebd-a7f1-2621e125504e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b05015a0-b648-4ebd-a7f1-2621e125504e\") " pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.862247 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05015a0-b648-4ebd-a7f1-2621e125504e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b05015a0-b648-4ebd-a7f1-2621e125504e\") " pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.862349 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b05015a0-b648-4ebd-a7f1-2621e125504e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b05015a0-b648-4ebd-a7f1-2621e125504e\") " pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.888445 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjqcl\" (UniqueName: \"kubernetes.io/projected/b05015a0-b648-4ebd-a7f1-2621e125504e-kube-api-access-rjqcl\") pod \"ovn-northd-0\" (UID: \"b05015a0-b648-4ebd-a7f1-2621e125504e\") " pod="openstack/ovn-northd-0" Feb 16 17:20:29 crc kubenswrapper[4794]: I0216 17:20:29.933104 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 17:20:30 crc kubenswrapper[4794]: I0216 17:20:30.130699 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 16 17:20:30 crc kubenswrapper[4794]: E0216 17:20:30.551556 4794 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d6f0b7b_1214_4425_a850_09933e0e9a6e.slice/crio-c685149ca5c1cacd22f1f520b28739a9acd18c22814651677ff962615d1dc812.scope\": RecentStats: unable to find data in memory cache]" Feb 16 17:20:30 crc kubenswrapper[4794]: I0216 17:20:30.564932 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 17:20:30 crc kubenswrapper[4794]: W0216 17:20:30.567494 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb05015a0_b648_4ebd_a7f1_2621e125504e.slice/crio-5c937f6b3ae5594804c5b6f5ccd6d85838783d2c7bb540619de3d5cf2e9ca3d9 WatchSource:0}: Error finding container 5c937f6b3ae5594804c5b6f5ccd6d85838783d2c7bb540619de3d5cf2e9ca3d9: Status 404 returned error can't find the container with id 5c937f6b3ae5594804c5b6f5ccd6d85838783d2c7bb540619de3d5cf2e9ca3d9 Feb 16 17:20:31 crc 
kubenswrapper[4794]: I0216 17:20:31.059855 4794 generic.go:334] "Generic (PLEG): container finished" podID="9d6f0b7b-1214-4425-a850-09933e0e9a6e" containerID="c685149ca5c1cacd22f1f520b28739a9acd18c22814651677ff962615d1dc812" exitCode=0 Feb 16 17:20:31 crc kubenswrapper[4794]: I0216 17:20:31.059941 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d6f0b7b-1214-4425-a850-09933e0e9a6e","Type":"ContainerDied","Data":"c685149ca5c1cacd22f1f520b28739a9acd18c22814651677ff962615d1dc812"} Feb 16 17:20:31 crc kubenswrapper[4794]: I0216 17:20:31.061340 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b05015a0-b648-4ebd-a7f1-2621e125504e","Type":"ContainerStarted","Data":"5c937f6b3ae5594804c5b6f5ccd6d85838783d2c7bb540619de3d5cf2e9ca3d9"} Feb 16 17:20:33 crc kubenswrapper[4794]: I0216 17:20:33.080246 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-dndb2" event={"ID":"53be55ab-28ee-4368-8650-f5c90340992a","Type":"ContainerStarted","Data":"6eff68df668384167a35834cfcb270bdd8a30fee88c7235d549d43a0f2df31b2"} Feb 16 17:20:33 crc kubenswrapper[4794]: I0216 17:20:33.081659 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-dndb2" Feb 16 17:20:33 crc kubenswrapper[4794]: I0216 17:20:33.084770 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr" event={"ID":"d282d685-2404-4141-9865-03de5d0928c5","Type":"ContainerStarted","Data":"81d45b52a52af2441eb9588c4e0b6643cdd1099654236bce8fcfef8a549151b7"} Feb 16 17:20:33 crc kubenswrapper[4794]: I0216 17:20:33.085672 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr" Feb 16 17:20:33 crc kubenswrapper[4794]: I0216 17:20:33.105644 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-8554648995-dndb2" podStartSLOduration=8.596742009 podStartE2EDuration="9.105624455s" podCreationTimestamp="2026-02-16 17:20:24 +0000 UTC" firstStartedPulling="2026-02-16 17:20:26.260745617 +0000 UTC m=+1252.208840264" lastFinishedPulling="2026-02-16 17:20:26.769628063 +0000 UTC m=+1252.717722710" observedRunningTime="2026-02-16 17:20:33.099627244 +0000 UTC m=+1259.047721901" watchObservedRunningTime="2026-02-16 17:20:33.105624455 +0000 UTC m=+1259.053719102" Feb 16 17:20:33 crc kubenswrapper[4794]: I0216 17:20:33.122043 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr" podStartSLOduration=8.685093773 podStartE2EDuration="9.12202069s" podCreationTimestamp="2026-02-16 17:20:24 +0000 UTC" firstStartedPulling="2026-02-16 17:20:26.501602816 +0000 UTC m=+1252.449697463" lastFinishedPulling="2026-02-16 17:20:26.938529733 +0000 UTC m=+1252.886624380" observedRunningTime="2026-02-16 17:20:33.114824268 +0000 UTC m=+1259.062918935" watchObservedRunningTime="2026-02-16 17:20:33.12202069 +0000 UTC m=+1259.070115337" Feb 16 17:20:33 crc kubenswrapper[4794]: I0216 17:20:33.685681 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 16 17:20:33 crc kubenswrapper[4794]: I0216 17:20:33.685760 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 16 17:20:33 crc kubenswrapper[4794]: I0216 17:20:33.847775 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 16 17:20:34 crc kubenswrapper[4794]: I0216 17:20:34.096723 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b05015a0-b648-4ebd-a7f1-2621e125504e","Type":"ContainerStarted","Data":"bb3b859ad07d93b64046205fb57b969ba6e0842e0dc73c75ac6ea9f632b73145"} Feb 16 17:20:34 crc kubenswrapper[4794]: I0216 17:20:34.193232 4794 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 16 17:20:35 crc kubenswrapper[4794]: I0216 17:20:35.027859 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 16 17:20:35 crc kubenswrapper[4794]: I0216 17:20:35.028293 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 16 17:20:35 crc kubenswrapper[4794]: I0216 17:20:35.120267 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b05015a0-b648-4ebd-a7f1-2621e125504e","Type":"ContainerStarted","Data":"a36651c03cd4c1551bff823bb0d1cf1e5dde0217a04edbddb200653e2c1413a8"} Feb 16 17:20:35 crc kubenswrapper[4794]: I0216 17:20:35.164979 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.02518122 podStartE2EDuration="6.164959472s" podCreationTimestamp="2026-02-16 17:20:29 +0000 UTC" firstStartedPulling="2026-02-16 17:20:30.570172152 +0000 UTC m=+1256.518266799" lastFinishedPulling="2026-02-16 17:20:33.709950404 +0000 UTC m=+1259.658045051" observedRunningTime="2026-02-16 17:20:35.153160433 +0000 UTC m=+1261.101255080" watchObservedRunningTime="2026-02-16 17:20:35.164959472 +0000 UTC m=+1261.113054119" Feb 16 17:20:35 crc kubenswrapper[4794]: I0216 17:20:35.389685 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 16 17:20:35 crc kubenswrapper[4794]: I0216 17:20:35.495149 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.129100 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.252413 4794 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-0fed-account-create-update-tb2gr"] Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.254375 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0fed-account-create-update-tb2gr" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.260571 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0fed-account-create-update-tb2gr"] Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.262150 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.303752 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6wxqb"] Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.305111 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6wxqb" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.308594 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhnq2\" (UniqueName: \"kubernetes.io/projected/3d346472-4e86-4519-8307-ee7cf5f74280-kube-api-access-fhnq2\") pod \"keystone-0fed-account-create-update-tb2gr\" (UID: \"3d346472-4e86-4519-8307-ee7cf5f74280\") " pod="openstack/keystone-0fed-account-create-update-tb2gr" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.308647 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d346472-4e86-4519-8307-ee7cf5f74280-operator-scripts\") pod \"keystone-0fed-account-create-update-tb2gr\" (UID: \"3d346472-4e86-4519-8307-ee7cf5f74280\") " pod="openstack/keystone-0fed-account-create-update-tb2gr" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.314286 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6wxqb"] Feb 16 17:20:36 
crc kubenswrapper[4794]: I0216 17:20:36.410900 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7351df94-ade5-4e5e-b281-b195301dc37d-operator-scripts\") pod \"keystone-db-create-6wxqb\" (UID: \"7351df94-ade5-4e5e-b281-b195301dc37d\") " pod="openstack/keystone-db-create-6wxqb" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.410989 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhnq2\" (UniqueName: \"kubernetes.io/projected/3d346472-4e86-4519-8307-ee7cf5f74280-kube-api-access-fhnq2\") pod \"keystone-0fed-account-create-update-tb2gr\" (UID: \"3d346472-4e86-4519-8307-ee7cf5f74280\") " pod="openstack/keystone-0fed-account-create-update-tb2gr" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.411037 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d346472-4e86-4519-8307-ee7cf5f74280-operator-scripts\") pod \"keystone-0fed-account-create-update-tb2gr\" (UID: \"3d346472-4e86-4519-8307-ee7cf5f74280\") " pod="openstack/keystone-0fed-account-create-update-tb2gr" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.411127 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfphx\" (UniqueName: \"kubernetes.io/projected/7351df94-ade5-4e5e-b281-b195301dc37d-kube-api-access-bfphx\") pod \"keystone-db-create-6wxqb\" (UID: \"7351df94-ade5-4e5e-b281-b195301dc37d\") " pod="openstack/keystone-db-create-6wxqb" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.412079 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d346472-4e86-4519-8307-ee7cf5f74280-operator-scripts\") pod \"keystone-0fed-account-create-update-tb2gr\" (UID: 
\"3d346472-4e86-4519-8307-ee7cf5f74280\") " pod="openstack/keystone-0fed-account-create-update-tb2gr" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.460569 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-s4fk6"] Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.461957 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-s4fk6" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.467941 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhnq2\" (UniqueName: \"kubernetes.io/projected/3d346472-4e86-4519-8307-ee7cf5f74280-kube-api-access-fhnq2\") pod \"keystone-0fed-account-create-update-tb2gr\" (UID: \"3d346472-4e86-4519-8307-ee7cf5f74280\") " pod="openstack/keystone-0fed-account-create-update-tb2gr" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.475245 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-s4fk6"] Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.512552 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7351df94-ade5-4e5e-b281-b195301dc37d-operator-scripts\") pod \"keystone-db-create-6wxqb\" (UID: \"7351df94-ade5-4e5e-b281-b195301dc37d\") " pod="openstack/keystone-db-create-6wxqb" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.512688 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05c86def-4e37-40ef-847d-ccb9dd6c99a9-operator-scripts\") pod \"placement-db-create-s4fk6\" (UID: \"05c86def-4e37-40ef-847d-ccb9dd6c99a9\") " pod="openstack/placement-db-create-s4fk6" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.512737 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-j7w2h\" (UniqueName: \"kubernetes.io/projected/05c86def-4e37-40ef-847d-ccb9dd6c99a9-kube-api-access-j7w2h\") pod \"placement-db-create-s4fk6\" (UID: \"05c86def-4e37-40ef-847d-ccb9dd6c99a9\") " pod="openstack/placement-db-create-s4fk6" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.512779 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfphx\" (UniqueName: \"kubernetes.io/projected/7351df94-ade5-4e5e-b281-b195301dc37d-kube-api-access-bfphx\") pod \"keystone-db-create-6wxqb\" (UID: \"7351df94-ade5-4e5e-b281-b195301dc37d\") " pod="openstack/keystone-db-create-6wxqb" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.513529 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7351df94-ade5-4e5e-b281-b195301dc37d-operator-scripts\") pod \"keystone-db-create-6wxqb\" (UID: \"7351df94-ade5-4e5e-b281-b195301dc37d\") " pod="openstack/keystone-db-create-6wxqb" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.541219 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfphx\" (UniqueName: \"kubernetes.io/projected/7351df94-ade5-4e5e-b281-b195301dc37d-kube-api-access-bfphx\") pod \"keystone-db-create-6wxqb\" (UID: \"7351df94-ade5-4e5e-b281-b195301dc37d\") " pod="openstack/keystone-db-create-6wxqb" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.567343 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c135-account-create-update-79qz9"] Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.568867 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c135-account-create-update-79qz9" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.571543 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.577061 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c135-account-create-update-79qz9"] Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.580956 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0fed-account-create-update-tb2gr" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.615029 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05c86def-4e37-40ef-847d-ccb9dd6c99a9-operator-scripts\") pod \"placement-db-create-s4fk6\" (UID: \"05c86def-4e37-40ef-847d-ccb9dd6c99a9\") " pod="openstack/placement-db-create-s4fk6" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.615091 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7w2h\" (UniqueName: \"kubernetes.io/projected/05c86def-4e37-40ef-847d-ccb9dd6c99a9-kube-api-access-j7w2h\") pod \"placement-db-create-s4fk6\" (UID: \"05c86def-4e37-40ef-847d-ccb9dd6c99a9\") " pod="openstack/placement-db-create-s4fk6" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.615182 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8cd1b17-5173-42b6-a51d-e2a057d404f4-operator-scripts\") pod \"placement-c135-account-create-update-79qz9\" (UID: \"b8cd1b17-5173-42b6-a51d-e2a057d404f4\") " pod="openstack/placement-c135-account-create-update-79qz9" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.615224 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-22nw2\" (UniqueName: \"kubernetes.io/projected/b8cd1b17-5173-42b6-a51d-e2a057d404f4-kube-api-access-22nw2\") pod \"placement-c135-account-create-update-79qz9\" (UID: \"b8cd1b17-5173-42b6-a51d-e2a057d404f4\") " pod="openstack/placement-c135-account-create-update-79qz9" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.615969 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05c86def-4e37-40ef-847d-ccb9dd6c99a9-operator-scripts\") pod \"placement-db-create-s4fk6\" (UID: \"05c86def-4e37-40ef-847d-ccb9dd6c99a9\") " pod="openstack/placement-db-create-s4fk6" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.632215 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7w2h\" (UniqueName: \"kubernetes.io/projected/05c86def-4e37-40ef-847d-ccb9dd6c99a9-kube-api-access-j7w2h\") pod \"placement-db-create-s4fk6\" (UID: \"05c86def-4e37-40ef-847d-ccb9dd6c99a9\") " pod="openstack/placement-db-create-s4fk6" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.634428 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6wxqb" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.720010 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8cd1b17-5173-42b6-a51d-e2a057d404f4-operator-scripts\") pod \"placement-c135-account-create-update-79qz9\" (UID: \"b8cd1b17-5173-42b6-a51d-e2a057d404f4\") " pod="openstack/placement-c135-account-create-update-79qz9" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.721105 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22nw2\" (UniqueName: \"kubernetes.io/projected/b8cd1b17-5173-42b6-a51d-e2a057d404f4-kube-api-access-22nw2\") pod \"placement-c135-account-create-update-79qz9\" (UID: \"b8cd1b17-5173-42b6-a51d-e2a057d404f4\") " pod="openstack/placement-c135-account-create-update-79qz9" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.721191 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8cd1b17-5173-42b6-a51d-e2a057d404f4-operator-scripts\") pod \"placement-c135-account-create-update-79qz9\" (UID: \"b8cd1b17-5173-42b6-a51d-e2a057d404f4\") " pod="openstack/placement-c135-account-create-update-79qz9" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.748905 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22nw2\" (UniqueName: \"kubernetes.io/projected/b8cd1b17-5173-42b6-a51d-e2a057d404f4-kube-api-access-22nw2\") pod \"placement-c135-account-create-update-79qz9\" (UID: \"b8cd1b17-5173-42b6-a51d-e2a057d404f4\") " pod="openstack/placement-c135-account-create-update-79qz9" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.761562 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c135-account-create-update-79qz9" Feb 16 17:20:36 crc kubenswrapper[4794]: I0216 17:20:36.822970 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-s4fk6" Feb 16 17:20:37 crc kubenswrapper[4794]: I0216 17:20:37.123658 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0fed-account-create-update-tb2gr"] Feb 16 17:20:37 crc kubenswrapper[4794]: W0216 17:20:37.135770 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d346472_4e86_4519_8307_ee7cf5f74280.slice/crio-5e75c2407cb58d65dfd3b0abee6726eff8a9b3f622e2eaed8c18f301f08baf8a WatchSource:0}: Error finding container 5e75c2407cb58d65dfd3b0abee6726eff8a9b3f622e2eaed8c18f301f08baf8a: Status 404 returned error can't find the container with id 5e75c2407cb58d65dfd3b0abee6726eff8a9b3f622e2eaed8c18f301f08baf8a Feb 16 17:20:37 crc kubenswrapper[4794]: I0216 17:20:37.257553 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6wxqb"] Feb 16 17:20:37 crc kubenswrapper[4794]: W0216 17:20:37.261634 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7351df94_ade5_4e5e_b281_b195301dc37d.slice/crio-172106df5f4ac4e68344de8996ff18743b4169dd116b1c7b2abccddaeb8cc1c2 WatchSource:0}: Error finding container 172106df5f4ac4e68344de8996ff18743b4169dd116b1c7b2abccddaeb8cc1c2: Status 404 returned error can't find the container with id 172106df5f4ac4e68344de8996ff18743b4169dd116b1c7b2abccddaeb8cc1c2 Feb 16 17:20:37 crc kubenswrapper[4794]: I0216 17:20:37.408273 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c135-account-create-update-79qz9"] Feb 16 17:20:37 crc kubenswrapper[4794]: W0216 17:20:37.411389 4794 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8cd1b17_5173_42b6_a51d_e2a057d404f4.slice/crio-a18a475b0ddbf454953172821d7daca3a60e362fd7f771fb530e3705bec1aa3f WatchSource:0}: Error finding container a18a475b0ddbf454953172821d7daca3a60e362fd7f771fb530e3705bec1aa3f: Status 404 returned error can't find the container with id a18a475b0ddbf454953172821d7daca3a60e362fd7f771fb530e3705bec1aa3f Feb 16 17:20:37 crc kubenswrapper[4794]: I0216 17:20:37.445876 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-s4fk6"] Feb 16 17:20:37 crc kubenswrapper[4794]: W0216 17:20:37.454770 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05c86def_4e37_40ef_847d_ccb9dd6c99a9.slice/crio-75c2c512d3771305d55efe20065a2ba93d53bda2b6f7ccaec687462a87e457b8 WatchSource:0}: Error finding container 75c2c512d3771305d55efe20065a2ba93d53bda2b6f7ccaec687462a87e457b8: Status 404 returned error can't find the container with id 75c2c512d3771305d55efe20065a2ba93d53bda2b6f7ccaec687462a87e457b8 Feb 16 17:20:37 crc kubenswrapper[4794]: I0216 17:20:37.536146 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 17:20:37 crc kubenswrapper[4794]: I0216 17:20:37.882662 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-ffrdr"] Feb 16 17:20:37 crc kubenswrapper[4794]: I0216 17:20:37.883152 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr" podUID="d282d685-2404-4141-9865-03de5d0928c5" containerName="dnsmasq-dns" containerID="cri-o://81d45b52a52af2441eb9588c4e0b6643cdd1099654236bce8fcfef8a549151b7" gracePeriod=10 Feb 16 17:20:37 crc kubenswrapper[4794]: I0216 17:20:37.884449 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr" Feb 
16 17:20:37 crc kubenswrapper[4794]: I0216 17:20:37.950850 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lswtm"] Feb 16 17:20:37 crc kubenswrapper[4794]: I0216 17:20:37.953038 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.003652 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lswtm"] Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.077548 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lswtm\" (UID: \"2f564c83-65cd-4eb0-81b3-155b5a221041\") " pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.077627 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lswtm\" (UID: \"2f564c83-65cd-4eb0-81b3-155b5a221041\") " pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.077691 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56xrn\" (UniqueName: \"kubernetes.io/projected/2f564c83-65cd-4eb0-81b3-155b5a221041-kube-api-access-56xrn\") pod \"dnsmasq-dns-b8fbc5445-lswtm\" (UID: \"2f564c83-65cd-4eb0-81b3-155b5a221041\") " pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.077741 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-dns-svc\") pod 
\"dnsmasq-dns-b8fbc5445-lswtm\" (UID: \"2f564c83-65cd-4eb0-81b3-155b5a221041\") " pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.077768 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-config\") pod \"dnsmasq-dns-b8fbc5445-lswtm\" (UID: \"2f564c83-65cd-4eb0-81b3-155b5a221041\") " pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.177663 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-nbn72"] Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.179412 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-nbn72" Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.182846 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lswtm\" (UID: \"2f564c83-65cd-4eb0-81b3-155b5a221041\") " pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.183101 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lswtm\" (UID: \"2f564c83-65cd-4eb0-81b3-155b5a221041\") " pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.183232 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56xrn\" (UniqueName: \"kubernetes.io/projected/2f564c83-65cd-4eb0-81b3-155b5a221041-kube-api-access-56xrn\") pod \"dnsmasq-dns-b8fbc5445-lswtm\" (UID: 
\"2f564c83-65cd-4eb0-81b3-155b5a221041\") " pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.183344 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lswtm\" (UID: \"2f564c83-65cd-4eb0-81b3-155b5a221041\") " pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.183388 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-config\") pod \"dnsmasq-dns-b8fbc5445-lswtm\" (UID: \"2f564c83-65cd-4eb0-81b3-155b5a221041\") " pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.186980 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lswtm\" (UID: \"2f564c83-65cd-4eb0-81b3-155b5a221041\") " pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.188166 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lswtm\" (UID: \"2f564c83-65cd-4eb0-81b3-155b5a221041\") " pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.188364 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lswtm\" (UID: \"2f564c83-65cd-4eb0-81b3-155b5a221041\") " pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" Feb 16 17:20:38 crc kubenswrapper[4794]: 
I0216 17:20:38.201217 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-config\") pod \"dnsmasq-dns-b8fbc5445-lswtm\" (UID: \"2f564c83-65cd-4eb0-81b3-155b5a221041\") " pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.202520 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s4fk6" event={"ID":"05c86def-4e37-40ef-847d-ccb9dd6c99a9","Type":"ContainerStarted","Data":"75c2c512d3771305d55efe20065a2ba93d53bda2b6f7ccaec687462a87e457b8"} Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.236657 4794 generic.go:334] "Generic (PLEG): container finished" podID="7351df94-ade5-4e5e-b281-b195301dc37d" containerID="5e45cbe19ecb4c6c292eb2959ae5ea77a14adbd89b63acd3148b6a3a9f5f7e58" exitCode=0 Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.237506 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6wxqb" event={"ID":"7351df94-ade5-4e5e-b281-b195301dc37d","Type":"ContainerDied","Data":"5e45cbe19ecb4c6c292eb2959ae5ea77a14adbd89b63acd3148b6a3a9f5f7e58"} Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.237554 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6wxqb" event={"ID":"7351df94-ade5-4e5e-b281-b195301dc37d","Type":"ContainerStarted","Data":"172106df5f4ac4e68344de8996ff18743b4169dd116b1c7b2abccddaeb8cc1c2"} Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.259245 4794 generic.go:334] "Generic (PLEG): container finished" podID="d282d685-2404-4141-9865-03de5d0928c5" containerID="81d45b52a52af2441eb9588c4e0b6643cdd1099654236bce8fcfef8a549151b7" exitCode=0 Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.259386 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr" 
event={"ID":"d282d685-2404-4141-9865-03de5d0928c5","Type":"ContainerDied","Data":"81d45b52a52af2441eb9588c4e0b6643cdd1099654236bce8fcfef8a549151b7"} Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.260742 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56xrn\" (UniqueName: \"kubernetes.io/projected/2f564c83-65cd-4eb0-81b3-155b5a221041-kube-api-access-56xrn\") pod \"dnsmasq-dns-b8fbc5445-lswtm\" (UID: \"2f564c83-65cd-4eb0-81b3-155b5a221041\") " pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.285145 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3bc8a6c-f954-4825-8853-316738b0eb94-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-nbn72\" (UID: \"c3bc8a6c-f954-4825-8853-316738b0eb94\") " pod="openstack/mysqld-exporter-openstack-db-create-nbn72" Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.285250 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shs8l\" (UniqueName: \"kubernetes.io/projected/c3bc8a6c-f954-4825-8853-316738b0eb94-kube-api-access-shs8l\") pod \"mysqld-exporter-openstack-db-create-nbn72\" (UID: \"c3bc8a6c-f954-4825-8853-316738b0eb94\") " pod="openstack/mysqld-exporter-openstack-db-create-nbn72" Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.288249 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c135-account-create-update-79qz9" event={"ID":"b8cd1b17-5173-42b6-a51d-e2a057d404f4","Type":"ContainerStarted","Data":"2829c362c5a037ccd3c1ad307b5707931b39470677ae11b3443c439c1a392495"} Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.288438 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c135-account-create-update-79qz9" 
event={"ID":"b8cd1b17-5173-42b6-a51d-e2a057d404f4","Type":"ContainerStarted","Data":"a18a475b0ddbf454953172821d7daca3a60e362fd7f771fb530e3705bec1aa3f"}
Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.301366 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-7b9c-account-create-update-xqtkk"]
Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.302902 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-7b9c-account-create-update-xqtkk"
Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.304658 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0fed-account-create-update-tb2gr" event={"ID":"3d346472-4e86-4519-8307-ee7cf5f74280","Type":"ContainerStarted","Data":"88e3906f0ca3fd28b8a0b47412e1e4a24f611740e2bc9e3bd7fb2503645ff84c"}
Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.304703 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0fed-account-create-update-tb2gr" event={"ID":"3d346472-4e86-4519-8307-ee7cf5f74280","Type":"ContainerStarted","Data":"5e75c2407cb58d65dfd3b0abee6726eff8a9b3f622e2eaed8c18f301f08baf8a"}
Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.313799 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret"
Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.339701 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-nbn72"]
Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.351390 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-7b9c-account-create-update-xqtkk"]
Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.390018 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shs8l\" (UniqueName: \"kubernetes.io/projected/c3bc8a6c-f954-4825-8853-316738b0eb94-kube-api-access-shs8l\") pod \"mysqld-exporter-openstack-db-create-nbn72\" (UID: \"c3bc8a6c-f954-4825-8853-316738b0eb94\") " pod="openstack/mysqld-exporter-openstack-db-create-nbn72"
Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.390230 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3bc8a6c-f954-4825-8853-316738b0eb94-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-nbn72\" (UID: \"c3bc8a6c-f954-4825-8853-316738b0eb94\") " pod="openstack/mysqld-exporter-openstack-db-create-nbn72"
Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.391174 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3bc8a6c-f954-4825-8853-316738b0eb94-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-nbn72\" (UID: \"c3bc8a6c-f954-4825-8853-316738b0eb94\") " pod="openstack/mysqld-exporter-openstack-db-create-nbn72"
Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.400124 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-c135-account-create-update-79qz9" podStartSLOduration=2.400101956 podStartE2EDuration="2.400101956s" podCreationTimestamp="2026-02-16 17:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:20:38.322616876 +0000 UTC m=+1264.270711523" watchObservedRunningTime="2026-02-16 17:20:38.400101956 +0000 UTC m=+1264.348196593"
Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.426356 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-0fed-account-create-update-tb2gr" podStartSLOduration=2.426337399 podStartE2EDuration="2.426337399s" podCreationTimestamp="2026-02-16 17:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:20:38.401767828 +0000 UTC m=+1264.349862475" watchObservedRunningTime="2026-02-16 17:20:38.426337399 +0000 UTC m=+1264.374432036"
Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.442493 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shs8l\" (UniqueName: \"kubernetes.io/projected/c3bc8a6c-f954-4825-8853-316738b0eb94-kube-api-access-shs8l\") pod \"mysqld-exporter-openstack-db-create-nbn72\" (UID: \"c3bc8a6c-f954-4825-8853-316738b0eb94\") " pod="openstack/mysqld-exporter-openstack-db-create-nbn72"
Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.469118 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lswtm"
Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.492469 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7fd9bb0-100b-4941-80d2-1a9ec63423be-operator-scripts\") pod \"mysqld-exporter-7b9c-account-create-update-xqtkk\" (UID: \"a7fd9bb0-100b-4941-80d2-1a9ec63423be\") " pod="openstack/mysqld-exporter-7b9c-account-create-update-xqtkk"
Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.492717 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvcvw\" (UniqueName: \"kubernetes.io/projected/a7fd9bb0-100b-4941-80d2-1a9ec63423be-kube-api-access-vvcvw\") pod \"mysqld-exporter-7b9c-account-create-update-xqtkk\" (UID: \"a7fd9bb0-100b-4941-80d2-1a9ec63423be\") " pod="openstack/mysqld-exporter-7b9c-account-create-update-xqtkk"
Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.540162 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-nbn72"
Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.595025 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7fd9bb0-100b-4941-80d2-1a9ec63423be-operator-scripts\") pod \"mysqld-exporter-7b9c-account-create-update-xqtkk\" (UID: \"a7fd9bb0-100b-4941-80d2-1a9ec63423be\") " pod="openstack/mysqld-exporter-7b9c-account-create-update-xqtkk"
Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.595096 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvcvw\" (UniqueName: \"kubernetes.io/projected/a7fd9bb0-100b-4941-80d2-1a9ec63423be-kube-api-access-vvcvw\") pod \"mysqld-exporter-7b9c-account-create-update-xqtkk\" (UID: \"a7fd9bb0-100b-4941-80d2-1a9ec63423be\") " pod="openstack/mysqld-exporter-7b9c-account-create-update-xqtkk"
Feb 16 17:20:38 crc kubenswrapper[4794]: I0216 17:20:38.596113 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7fd9bb0-100b-4941-80d2-1a9ec63423be-operator-scripts\") pod \"mysqld-exporter-7b9c-account-create-update-xqtkk\" (UID: \"a7fd9bb0-100b-4941-80d2-1a9ec63423be\") " pod="openstack/mysqld-exporter-7b9c-account-create-update-xqtkk"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:38.629935 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvcvw\" (UniqueName: \"kubernetes.io/projected/a7fd9bb0-100b-4941-80d2-1a9ec63423be-kube-api-access-vvcvw\") pod \"mysqld-exporter-7b9c-account-create-update-xqtkk\" (UID: \"a7fd9bb0-100b-4941-80d2-1a9ec63423be\") " pod="openstack/mysqld-exporter-7b9c-account-create-update-xqtkk"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:38.704272 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-7b9c-account-create-update-xqtkk"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:38.724233 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:38.899892 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d282d685-2404-4141-9865-03de5d0928c5-config\") pod \"d282d685-2404-4141-9865-03de5d0928c5\" (UID: \"d282d685-2404-4141-9865-03de5d0928c5\") "
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:38.900255 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzjq9\" (UniqueName: \"kubernetes.io/projected/d282d685-2404-4141-9865-03de5d0928c5-kube-api-access-lzjq9\") pod \"d282d685-2404-4141-9865-03de5d0928c5\" (UID: \"d282d685-2404-4141-9865-03de5d0928c5\") "
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:38.900363 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d282d685-2404-4141-9865-03de5d0928c5-ovsdbserver-nb\") pod \"d282d685-2404-4141-9865-03de5d0928c5\" (UID: \"d282d685-2404-4141-9865-03de5d0928c5\") "
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:38.900405 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d282d685-2404-4141-9865-03de5d0928c5-dns-svc\") pod \"d282d685-2404-4141-9865-03de5d0928c5\" (UID: \"d282d685-2404-4141-9865-03de5d0928c5\") "
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:38.905102 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d282d685-2404-4141-9865-03de5d0928c5-kube-api-access-lzjq9" (OuterVolumeSpecName: "kube-api-access-lzjq9") pod "d282d685-2404-4141-9865-03de5d0928c5" (UID: "d282d685-2404-4141-9865-03de5d0928c5"). InnerVolumeSpecName "kube-api-access-lzjq9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:38.947460 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d282d685-2404-4141-9865-03de5d0928c5-config" (OuterVolumeSpecName: "config") pod "d282d685-2404-4141-9865-03de5d0928c5" (UID: "d282d685-2404-4141-9865-03de5d0928c5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:38.960436 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d282d685-2404-4141-9865-03de5d0928c5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d282d685-2404-4141-9865-03de5d0928c5" (UID: "d282d685-2404-4141-9865-03de5d0928c5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:38.961082 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d282d685-2404-4141-9865-03de5d0928c5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d282d685-2404-4141-9865-03de5d0928c5" (UID: "d282d685-2404-4141-9865-03de5d0928c5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.003018 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzjq9\" (UniqueName: \"kubernetes.io/projected/d282d685-2404-4141-9865-03de5d0928c5-kube-api-access-lzjq9\") on node \"crc\" DevicePath \"\""
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.003043 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d282d685-2404-4141-9865-03de5d0928c5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.003052 4794 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d282d685-2404-4141-9865-03de5d0928c5-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.003060 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d282d685-2404-4141-9865-03de5d0928c5-config\") on node \"crc\" DevicePath \"\""
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.190843 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Feb 16 17:20:39 crc kubenswrapper[4794]: E0216 17:20:39.191769 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d282d685-2404-4141-9865-03de5d0928c5" containerName="dnsmasq-dns"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.191788 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d282d685-2404-4141-9865-03de5d0928c5" containerName="dnsmasq-dns"
Feb 16 17:20:39 crc kubenswrapper[4794]: E0216 17:20:39.191813 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d282d685-2404-4141-9865-03de5d0928c5" containerName="init"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.191820 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d282d685-2404-4141-9865-03de5d0928c5" containerName="init"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.192078 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d282d685-2404-4141-9865-03de5d0928c5" containerName="dnsmasq-dns"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.199450 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.202885 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-5hf47"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.203096 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.203217 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.218590 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.236141 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.310685 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/54acc9db-6bd7-463f-8637-6aa39ed3eb11-cache\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.310755 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b486829c-2c55-4e12-97ad-e065012e5e3b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b486829c-2c55-4e12-97ad-e065012e5e3b\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.310776 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/54acc9db-6bd7-463f-8637-6aa39ed3eb11-lock\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.310795 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54acc9db-6bd7-463f-8637-6aa39ed3eb11-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.310832 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz6k8\" (UniqueName: \"kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-kube-api-access-mz6k8\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.310853 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-etc-swift\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.316614 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr" event={"ID":"d282d685-2404-4141-9865-03de5d0928c5","Type":"ContainerDied","Data":"ffd0783de29a9a14d7c669a1692aca115fa65707ec70b53549bc2ab0357f382c"}
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.316665 4794 scope.go:117] "RemoveContainer" containerID="81d45b52a52af2441eb9588c4e0b6643cdd1099654236bce8fcfef8a549151b7"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.316750 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-ffrdr"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.321575 4794 generic.go:334] "Generic (PLEG): container finished" podID="b8cd1b17-5173-42b6-a51d-e2a057d404f4" containerID="2829c362c5a037ccd3c1ad307b5707931b39470677ae11b3443c439c1a392495" exitCode=0
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.321634 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c135-account-create-update-79qz9" event={"ID":"b8cd1b17-5173-42b6-a51d-e2a057d404f4","Type":"ContainerDied","Data":"2829c362c5a037ccd3c1ad307b5707931b39470677ae11b3443c439c1a392495"}
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.324055 4794 generic.go:334] "Generic (PLEG): container finished" podID="05c86def-4e37-40ef-847d-ccb9dd6c99a9" containerID="b361f858b2a25ac83fb9cd20b3b7ef7c69f443dbfbcc0c2a577d2d34cebfc7e3" exitCode=0
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.324095 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s4fk6" event={"ID":"05c86def-4e37-40ef-847d-ccb9dd6c99a9","Type":"ContainerDied","Data":"b361f858b2a25ac83fb9cd20b3b7ef7c69f443dbfbcc0c2a577d2d34cebfc7e3"}
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.325811 4794 generic.go:334] "Generic (PLEG): container finished" podID="3d346472-4e86-4519-8307-ee7cf5f74280" containerID="88e3906f0ca3fd28b8a0b47412e1e4a24f611740e2bc9e3bd7fb2503645ff84c" exitCode=0
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.326056 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0fed-account-create-update-tb2gr" event={"ID":"3d346472-4e86-4519-8307-ee7cf5f74280","Type":"ContainerDied","Data":"88e3906f0ca3fd28b8a0b47412e1e4a24f611740e2bc9e3bd7fb2503645ff84c"}
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.424661 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b486829c-2c55-4e12-97ad-e065012e5e3b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b486829c-2c55-4e12-97ad-e065012e5e3b\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.424732 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/54acc9db-6bd7-463f-8637-6aa39ed3eb11-lock\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.424763 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54acc9db-6bd7-463f-8637-6aa39ed3eb11-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.424860 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz6k8\" (UniqueName: \"kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-kube-api-access-mz6k8\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.424902 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-etc-swift\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.425247 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/54acc9db-6bd7-463f-8637-6aa39ed3eb11-cache\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.425344 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-ffrdr"]
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.425394 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/54acc9db-6bd7-463f-8637-6aa39ed3eb11-lock\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0"
Feb 16 17:20:39 crc kubenswrapper[4794]: E0216 17:20:39.425439 4794 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 17:20:39 crc kubenswrapper[4794]: E0216 17:20:39.425468 4794 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 17:20:39 crc kubenswrapper[4794]: E0216 17:20:39.425521 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-etc-swift podName:54acc9db-6bd7-463f-8637-6aa39ed3eb11 nodeName:}" failed. No retries permitted until 2026-02-16 17:20:39.925506501 +0000 UTC m=+1265.873601148 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-etc-swift") pod "swift-storage-0" (UID: "54acc9db-6bd7-463f-8637-6aa39ed3eb11") : configmap "swift-ring-files" not found
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.425775 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/54acc9db-6bd7-463f-8637-6aa39ed3eb11-cache\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.433214 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54acc9db-6bd7-463f-8637-6aa39ed3eb11-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.434465 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.434516 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b486829c-2c55-4e12-97ad-e065012e5e3b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b486829c-2c55-4e12-97ad-e065012e5e3b\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/65fbc5fccb8b564b6e69578c7fca7c4f1dbf6345c545d3d3e01e564a1dfde437/globalmount\"" pod="openstack/swift-storage-0"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.445157 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-ffrdr"]
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.446539 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz6k8\" (UniqueName: \"kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-kube-api-access-mz6k8\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.522071 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b486829c-2c55-4e12-97ad-e065012e5e3b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b486829c-2c55-4e12-97ad-e065012e5e3b\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.576148 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-7b9c-account-create-update-xqtkk"]
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.592854 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lswtm"]
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.600502 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-nbn72"]
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.724679 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-w2gs8"]
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.726533 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.730812 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.731047 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.731155 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.750153 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-w2gs8"]
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.840601 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84dc223e-f01c-424c-802a-3e1a5ad819be-combined-ca-bundle\") pod \"swift-ring-rebalance-w2gs8\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") " pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.840647 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/84dc223e-f01c-424c-802a-3e1a5ad819be-swiftconf\") pod \"swift-ring-rebalance-w2gs8\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") " pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.840749 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/84dc223e-f01c-424c-802a-3e1a5ad819be-dispersionconf\") pod \"swift-ring-rebalance-w2gs8\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") " pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.840783 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw2v6\" (UniqueName: \"kubernetes.io/projected/84dc223e-f01c-424c-802a-3e1a5ad819be-kube-api-access-lw2v6\") pod \"swift-ring-rebalance-w2gs8\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") " pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.840921 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/84dc223e-f01c-424c-802a-3e1a5ad819be-scripts\") pod \"swift-ring-rebalance-w2gs8\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") " pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.840963 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/84dc223e-f01c-424c-802a-3e1a5ad819be-etc-swift\") pod \"swift-ring-rebalance-w2gs8\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") " pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.841042 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/84dc223e-f01c-424c-802a-3e1a5ad819be-ring-data-devices\") pod \"swift-ring-rebalance-w2gs8\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") " pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.942855 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/84dc223e-f01c-424c-802a-3e1a5ad819be-scripts\") pod \"swift-ring-rebalance-w2gs8\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") " pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.943273 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/84dc223e-f01c-424c-802a-3e1a5ad819be-etc-swift\") pod \"swift-ring-rebalance-w2gs8\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") " pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.943358 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-etc-swift\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.943385 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/84dc223e-f01c-424c-802a-3e1a5ad819be-ring-data-devices\") pod \"swift-ring-rebalance-w2gs8\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") " pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.943423 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84dc223e-f01c-424c-802a-3e1a5ad819be-combined-ca-bundle\") pod \"swift-ring-rebalance-w2gs8\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") " pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.943439 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/84dc223e-f01c-424c-802a-3e1a5ad819be-swiftconf\") pod \"swift-ring-rebalance-w2gs8\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") " pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.943481 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/84dc223e-f01c-424c-802a-3e1a5ad819be-dispersionconf\") pod \"swift-ring-rebalance-w2gs8\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") " pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.943534 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw2v6\" (UniqueName: \"kubernetes.io/projected/84dc223e-f01c-424c-802a-3e1a5ad819be-kube-api-access-lw2v6\") pod \"swift-ring-rebalance-w2gs8\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") " pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.946056 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/84dc223e-f01c-424c-802a-3e1a5ad819be-scripts\") pod \"swift-ring-rebalance-w2gs8\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") " pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.947071 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/84dc223e-f01c-424c-802a-3e1a5ad819be-etc-swift\") pod \"swift-ring-rebalance-w2gs8\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") " pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: E0216 17:20:39.947202 4794 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 17:20:39 crc kubenswrapper[4794]: E0216 17:20:39.947222 4794 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 17:20:39 crc kubenswrapper[4794]: E0216 17:20:39.947504 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-etc-swift podName:54acc9db-6bd7-463f-8637-6aa39ed3eb11 nodeName:}" failed. No retries permitted until 2026-02-16 17:20:40.947460907 +0000 UTC m=+1266.895555604 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-etc-swift") pod "swift-storage-0" (UID: "54acc9db-6bd7-463f-8637-6aa39ed3eb11") : configmap "swift-ring-files" not found
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.948597 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/84dc223e-f01c-424c-802a-3e1a5ad819be-ring-data-devices\") pod \"swift-ring-rebalance-w2gs8\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") " pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.951520 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/84dc223e-f01c-424c-802a-3e1a5ad819be-swiftconf\") pod \"swift-ring-rebalance-w2gs8\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") " pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.954794 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/84dc223e-f01c-424c-802a-3e1a5ad819be-dispersionconf\") pod \"swift-ring-rebalance-w2gs8\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") " pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.961492 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84dc223e-f01c-424c-802a-3e1a5ad819be-combined-ca-bundle\") pod \"swift-ring-rebalance-w2gs8\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") " pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.961828 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw2v6\" (UniqueName: \"kubernetes.io/projected/84dc223e-f01c-424c-802a-3e1a5ad819be-kube-api-access-lw2v6\") pod \"swift-ring-rebalance-w2gs8\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") " pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:39 crc kubenswrapper[4794]: I0216 17:20:39.989446 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-dndb2"
Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.055672 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.298813 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-l72f2"]
Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.300557 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-l72f2"
Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.325834 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-l72f2"]
Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.453622 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-42a4-account-create-update-r755d"]
Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.454937 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-42a4-account-create-update-r755d"
Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.459453 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aee4dca2-9581-44c6-91db-ce6516f9b05e-operator-scripts\") pod \"glance-db-create-l72f2\" (UID: \"aee4dca2-9581-44c6-91db-ce6516f9b05e\") " pod="openstack/glance-db-create-l72f2"
Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.459665 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwhv9\" (UniqueName: \"kubernetes.io/projected/aee4dca2-9581-44c6-91db-ce6516f9b05e-kube-api-access-jwhv9\") pod \"glance-db-create-l72f2\" (UID: \"aee4dca2-9581-44c6-91db-ce6516f9b05e\") " pod="openstack/glance-db-create-l72f2"
Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.462041 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.488327 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-42a4-account-create-update-r755d"]
Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.562121 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj8gr\" (UniqueName: \"kubernetes.io/projected/b4ed7df7-08c2-4c06-bd2b-14ea362191d1-kube-api-access-gj8gr\") pod \"glance-42a4-account-create-update-r755d\" (UID: \"b4ed7df7-08c2-4c06-bd2b-14ea362191d1\") " pod="openstack/glance-42a4-account-create-update-r755d"
Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.562210 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4ed7df7-08c2-4c06-bd2b-14ea362191d1-operator-scripts\") pod \"glance-42a4-account-create-update-r755d\"
(UID: \"b4ed7df7-08c2-4c06-bd2b-14ea362191d1\") " pod="openstack/glance-42a4-account-create-update-r755d" Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.562341 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwhv9\" (UniqueName: \"kubernetes.io/projected/aee4dca2-9581-44c6-91db-ce6516f9b05e-kube-api-access-jwhv9\") pod \"glance-db-create-l72f2\" (UID: \"aee4dca2-9581-44c6-91db-ce6516f9b05e\") " pod="openstack/glance-db-create-l72f2" Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.562449 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aee4dca2-9581-44c6-91db-ce6516f9b05e-operator-scripts\") pod \"glance-db-create-l72f2\" (UID: \"aee4dca2-9581-44c6-91db-ce6516f9b05e\") " pod="openstack/glance-db-create-l72f2" Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.565811 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aee4dca2-9581-44c6-91db-ce6516f9b05e-operator-scripts\") pod \"glance-db-create-l72f2\" (UID: \"aee4dca2-9581-44c6-91db-ce6516f9b05e\") " pod="openstack/glance-db-create-l72f2" Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.627026 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwhv9\" (UniqueName: \"kubernetes.io/projected/aee4dca2-9581-44c6-91db-ce6516f9b05e-kube-api-access-jwhv9\") pod \"glance-db-create-l72f2\" (UID: \"aee4dca2-9581-44c6-91db-ce6516f9b05e\") " pod="openstack/glance-db-create-l72f2" Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.650906 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-l72f2" Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.665094 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj8gr\" (UniqueName: \"kubernetes.io/projected/b4ed7df7-08c2-4c06-bd2b-14ea362191d1-kube-api-access-gj8gr\") pod \"glance-42a4-account-create-update-r755d\" (UID: \"b4ed7df7-08c2-4c06-bd2b-14ea362191d1\") " pod="openstack/glance-42a4-account-create-update-r755d" Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.665176 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4ed7df7-08c2-4c06-bd2b-14ea362191d1-operator-scripts\") pod \"glance-42a4-account-create-update-r755d\" (UID: \"b4ed7df7-08c2-4c06-bd2b-14ea362191d1\") " pod="openstack/glance-42a4-account-create-update-r755d" Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.666337 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4ed7df7-08c2-4c06-bd2b-14ea362191d1-operator-scripts\") pod \"glance-42a4-account-create-update-r755d\" (UID: \"b4ed7df7-08c2-4c06-bd2b-14ea362191d1\") " pod="openstack/glance-42a4-account-create-update-r755d" Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.705679 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj8gr\" (UniqueName: \"kubernetes.io/projected/b4ed7df7-08c2-4c06-bd2b-14ea362191d1-kube-api-access-gj8gr\") pod \"glance-42a4-account-create-update-r755d\" (UID: \"b4ed7df7-08c2-4c06-bd2b-14ea362191d1\") " pod="openstack/glance-42a4-account-create-update-r755d" Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.816203 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-42a4-account-create-update-r755d" Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.822320 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d282d685-2404-4141-9865-03de5d0928c5" path="/var/lib/kubelet/pods/d282d685-2404-4141-9865-03de5d0928c5/volumes" Feb 16 17:20:40 crc kubenswrapper[4794]: I0216 17:20:40.974800 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-etc-swift\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0" Feb 16 17:20:40 crc kubenswrapper[4794]: E0216 17:20:40.980710 4794 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 17:20:40 crc kubenswrapper[4794]: E0216 17:20:40.980749 4794 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 17:20:40 crc kubenswrapper[4794]: E0216 17:20:40.980804 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-etc-swift podName:54acc9db-6bd7-463f-8637-6aa39ed3eb11 nodeName:}" failed. No retries permitted until 2026-02-16 17:20:42.980783192 +0000 UTC m=+1268.928877899 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-etc-swift") pod "swift-storage-0" (UID: "54acc9db-6bd7-463f-8637-6aa39ed3eb11") : configmap "swift-ring-files" not found Feb 16 17:20:42 crc kubenswrapper[4794]: I0216 17:20:42.180383 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-z7z5p"] Feb 16 17:20:42 crc kubenswrapper[4794]: I0216 17:20:42.182342 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-z7z5p" Feb 16 17:20:42 crc kubenswrapper[4794]: I0216 17:20:42.185873 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 16 17:20:42 crc kubenswrapper[4794]: I0216 17:20:42.200670 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-z7z5p"] Feb 16 17:20:42 crc kubenswrapper[4794]: I0216 17:20:42.305766 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcdlc\" (UniqueName: \"kubernetes.io/projected/d475f629-e8d0-4167-ba4d-37918b079499-kube-api-access-kcdlc\") pod \"root-account-create-update-z7z5p\" (UID: \"d475f629-e8d0-4167-ba4d-37918b079499\") " pod="openstack/root-account-create-update-z7z5p" Feb 16 17:20:42 crc kubenswrapper[4794]: I0216 17:20:42.306081 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d475f629-e8d0-4167-ba4d-37918b079499-operator-scripts\") pod \"root-account-create-update-z7z5p\" (UID: \"d475f629-e8d0-4167-ba4d-37918b079499\") " pod="openstack/root-account-create-update-z7z5p" Feb 16 17:20:42 crc kubenswrapper[4794]: I0216 17:20:42.408767 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcdlc\" (UniqueName: \"kubernetes.io/projected/d475f629-e8d0-4167-ba4d-37918b079499-kube-api-access-kcdlc\") pod \"root-account-create-update-z7z5p\" (UID: \"d475f629-e8d0-4167-ba4d-37918b079499\") " pod="openstack/root-account-create-update-z7z5p" Feb 16 17:20:42 crc kubenswrapper[4794]: I0216 17:20:42.408927 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d475f629-e8d0-4167-ba4d-37918b079499-operator-scripts\") pod \"root-account-create-update-z7z5p\" (UID: 
\"d475f629-e8d0-4167-ba4d-37918b079499\") " pod="openstack/root-account-create-update-z7z5p" Feb 16 17:20:42 crc kubenswrapper[4794]: I0216 17:20:42.409820 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d475f629-e8d0-4167-ba4d-37918b079499-operator-scripts\") pod \"root-account-create-update-z7z5p\" (UID: \"d475f629-e8d0-4167-ba4d-37918b079499\") " pod="openstack/root-account-create-update-z7z5p" Feb 16 17:20:42 crc kubenswrapper[4794]: I0216 17:20:42.429612 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcdlc\" (UniqueName: \"kubernetes.io/projected/d475f629-e8d0-4167-ba4d-37918b079499-kube-api-access-kcdlc\") pod \"root-account-create-update-z7z5p\" (UID: \"d475f629-e8d0-4167-ba4d-37918b079499\") " pod="openstack/root-account-create-update-z7z5p" Feb 16 17:20:42 crc kubenswrapper[4794]: I0216 17:20:42.507413 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-z7z5p" Feb 16 17:20:43 crc kubenswrapper[4794]: I0216 17:20:43.021616 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-etc-swift\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0" Feb 16 17:20:43 crc kubenswrapper[4794]: E0216 17:20:43.021959 4794 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 17:20:43 crc kubenswrapper[4794]: E0216 17:20:43.022023 4794 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 17:20:43 crc kubenswrapper[4794]: E0216 17:20:43.022122 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-etc-swift podName:54acc9db-6bd7-463f-8637-6aa39ed3eb11 nodeName:}" failed. No retries permitted until 2026-02-16 17:20:47.022094853 +0000 UTC m=+1272.970189500 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-etc-swift") pod "swift-storage-0" (UID: "54acc9db-6bd7-463f-8637-6aa39ed3eb11") : configmap "swift-ring-files" not found Feb 16 17:20:43 crc kubenswrapper[4794]: W0216 17:20:43.606992 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7fd9bb0_100b_4941_80d2_1a9ec63423be.slice/crio-955e628a8d6bae76f634d0260259c4ecb630dc303bb5a8da417f7caeaf453ad0 WatchSource:0}: Error finding container 955e628a8d6bae76f634d0260259c4ecb630dc303bb5a8da417f7caeaf453ad0: Status 404 returned error can't find the container with id 955e628a8d6bae76f634d0260259c4ecb630dc303bb5a8da417f7caeaf453ad0 Feb 16 17:20:43 crc kubenswrapper[4794]: I0216 17:20:43.666562 4794 scope.go:117] "RemoveContainer" containerID="987d6dce6a954698dda23d72d893856fd8c7fa8bc59e1cf5803f69c670e9a145" Feb 16 17:20:43 crc kubenswrapper[4794]: I0216 17:20:43.922130 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6wxqb" Feb 16 17:20:43 crc kubenswrapper[4794]: I0216 17:20:43.982128 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-576f6bf7c-mkh5d" podUID="eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c" containerName="console" containerID="cri-o://bb82561ee7b85bb649642db64d1c9def75f7f9722c2e24704b38b18398a51d21" gracePeriod=15 Feb 16 17:20:43 crc kubenswrapper[4794]: I0216 17:20:43.983203 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c135-account-create-update-79qz9" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.058665 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7351df94-ade5-4e5e-b281-b195301dc37d-operator-scripts\") pod \"7351df94-ade5-4e5e-b281-b195301dc37d\" (UID: \"7351df94-ade5-4e5e-b281-b195301dc37d\") " Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.059048 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfphx\" (UniqueName: \"kubernetes.io/projected/7351df94-ade5-4e5e-b281-b195301dc37d-kube-api-access-bfphx\") pod \"7351df94-ade5-4e5e-b281-b195301dc37d\" (UID: \"7351df94-ade5-4e5e-b281-b195301dc37d\") " Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.059328 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7351df94-ade5-4e5e-b281-b195301dc37d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7351df94-ade5-4e5e-b281-b195301dc37d" (UID: "7351df94-ade5-4e5e-b281-b195301dc37d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.059933 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7351df94-ade5-4e5e-b281-b195301dc37d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.092699 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7351df94-ade5-4e5e-b281-b195301dc37d-kube-api-access-bfphx" (OuterVolumeSpecName: "kube-api-access-bfphx") pod "7351df94-ade5-4e5e-b281-b195301dc37d" (UID: "7351df94-ade5-4e5e-b281-b195301dc37d"). InnerVolumeSpecName "kube-api-access-bfphx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.096714 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-s4fk6" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.161435 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22nw2\" (UniqueName: \"kubernetes.io/projected/b8cd1b17-5173-42b6-a51d-e2a057d404f4-kube-api-access-22nw2\") pod \"b8cd1b17-5173-42b6-a51d-e2a057d404f4\" (UID: \"b8cd1b17-5173-42b6-a51d-e2a057d404f4\") " Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.161567 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8cd1b17-5173-42b6-a51d-e2a057d404f4-operator-scripts\") pod \"b8cd1b17-5173-42b6-a51d-e2a057d404f4\" (UID: \"b8cd1b17-5173-42b6-a51d-e2a057d404f4\") " Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.162283 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfphx\" (UniqueName: \"kubernetes.io/projected/7351df94-ade5-4e5e-b281-b195301dc37d-kube-api-access-bfphx\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.164933 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8cd1b17-5173-42b6-a51d-e2a057d404f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8cd1b17-5173-42b6-a51d-e2a057d404f4" (UID: "b8cd1b17-5173-42b6-a51d-e2a057d404f4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.167022 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8cd1b17-5173-42b6-a51d-e2a057d404f4-kube-api-access-22nw2" (OuterVolumeSpecName: "kube-api-access-22nw2") pod "b8cd1b17-5173-42b6-a51d-e2a057d404f4" (UID: "b8cd1b17-5173-42b6-a51d-e2a057d404f4"). InnerVolumeSpecName "kube-api-access-22nw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.187759 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0fed-account-create-update-tb2gr" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.242620 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-l72f2"] Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.269409 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05c86def-4e37-40ef-847d-ccb9dd6c99a9-operator-scripts\") pod \"05c86def-4e37-40ef-847d-ccb9dd6c99a9\" (UID: \"05c86def-4e37-40ef-847d-ccb9dd6c99a9\") " Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.269491 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d346472-4e86-4519-8307-ee7cf5f74280-operator-scripts\") pod \"3d346472-4e86-4519-8307-ee7cf5f74280\" (UID: \"3d346472-4e86-4519-8307-ee7cf5f74280\") " Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.269763 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhnq2\" (UniqueName: \"kubernetes.io/projected/3d346472-4e86-4519-8307-ee7cf5f74280-kube-api-access-fhnq2\") pod \"3d346472-4e86-4519-8307-ee7cf5f74280\" (UID: \"3d346472-4e86-4519-8307-ee7cf5f74280\") " Feb 16 17:20:44 crc 
kubenswrapper[4794]: I0216 17:20:44.269842 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7w2h\" (UniqueName: \"kubernetes.io/projected/05c86def-4e37-40ef-847d-ccb9dd6c99a9-kube-api-access-j7w2h\") pod \"05c86def-4e37-40ef-847d-ccb9dd6c99a9\" (UID: \"05c86def-4e37-40ef-847d-ccb9dd6c99a9\") " Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.270523 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05c86def-4e37-40ef-847d-ccb9dd6c99a9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "05c86def-4e37-40ef-847d-ccb9dd6c99a9" (UID: "05c86def-4e37-40ef-847d-ccb9dd6c99a9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.271493 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05c86def-4e37-40ef-847d-ccb9dd6c99a9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.271524 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22nw2\" (UniqueName: \"kubernetes.io/projected/b8cd1b17-5173-42b6-a51d-e2a057d404f4-kube-api-access-22nw2\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.271537 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8cd1b17-5173-42b6-a51d-e2a057d404f4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.272089 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d346472-4e86-4519-8307-ee7cf5f74280-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d346472-4e86-4519-8307-ee7cf5f74280" (UID: "3d346472-4e86-4519-8307-ee7cf5f74280"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.280283 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d346472-4e86-4519-8307-ee7cf5f74280-kube-api-access-fhnq2" (OuterVolumeSpecName: "kube-api-access-fhnq2") pod "3d346472-4e86-4519-8307-ee7cf5f74280" (UID: "3d346472-4e86-4519-8307-ee7cf5f74280"). InnerVolumeSpecName "kube-api-access-fhnq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.285261 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05c86def-4e37-40ef-847d-ccb9dd6c99a9-kube-api-access-j7w2h" (OuterVolumeSpecName: "kube-api-access-j7w2h") pod "05c86def-4e37-40ef-847d-ccb9dd6c99a9" (UID: "05c86def-4e37-40ef-847d-ccb9dd6c99a9"). InnerVolumeSpecName "kube-api-access-j7w2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.375221 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhnq2\" (UniqueName: \"kubernetes.io/projected/3d346472-4e86-4519-8307-ee7cf5f74280-kube-api-access-fhnq2\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.375266 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7w2h\" (UniqueName: \"kubernetes.io/projected/05c86def-4e37-40ef-847d-ccb9dd6c99a9-kube-api-access-j7w2h\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.375280 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d346472-4e86-4519-8307-ee7cf5f74280-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.422743 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-c135-account-create-update-79qz9" event={"ID":"b8cd1b17-5173-42b6-a51d-e2a057d404f4","Type":"ContainerDied","Data":"a18a475b0ddbf454953172821d7daca3a60e362fd7f771fb530e3705bec1aa3f"} Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.422788 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a18a475b0ddbf454953172821d7daca3a60e362fd7f771fb530e3705bec1aa3f" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.422821 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c135-account-create-update-79qz9" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.424683 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d6f0b7b-1214-4425-a850-09933e0e9a6e","Type":"ContainerStarted","Data":"379efa73cff06de727c8054915ead633b64ca17382d422e7cbdf46cece02fb7e"} Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.426867 4794 generic.go:334] "Generic (PLEG): container finished" podID="a7fd9bb0-100b-4941-80d2-1a9ec63423be" containerID="f8ac86bc80c5233684c1b47c179a1df5d96139bfd69fb1eaf0d71038282f797d" exitCode=0 Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.426956 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-7b9c-account-create-update-xqtkk" event={"ID":"a7fd9bb0-100b-4941-80d2-1a9ec63423be","Type":"ContainerDied","Data":"f8ac86bc80c5233684c1b47c179a1df5d96139bfd69fb1eaf0d71038282f797d"} Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.426985 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-7b9c-account-create-update-xqtkk" event={"ID":"a7fd9bb0-100b-4941-80d2-1a9ec63423be","Type":"ContainerStarted","Data":"955e628a8d6bae76f634d0260259c4ecb630dc303bb5a8da417f7caeaf453ad0"} Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.429104 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-create-s4fk6" event={"ID":"05c86def-4e37-40ef-847d-ccb9dd6c99a9","Type":"ContainerDied","Data":"75c2c512d3771305d55efe20065a2ba93d53bda2b6f7ccaec687462a87e457b8"} Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.429125 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75c2c512d3771305d55efe20065a2ba93d53bda2b6f7ccaec687462a87e457b8" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.429142 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-s4fk6" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.431084 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0fed-account-create-update-tb2gr" event={"ID":"3d346472-4e86-4519-8307-ee7cf5f74280","Type":"ContainerDied","Data":"5e75c2407cb58d65dfd3b0abee6726eff8a9b3f622e2eaed8c18f301f08baf8a"} Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.431109 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e75c2407cb58d65dfd3b0abee6726eff8a9b3f622e2eaed8c18f301f08baf8a" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.431095 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0fed-account-create-update-tb2gr" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.432872 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6wxqb" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.432873 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6wxqb" event={"ID":"7351df94-ade5-4e5e-b281-b195301dc37d","Type":"ContainerDied","Data":"172106df5f4ac4e68344de8996ff18743b4169dd116b1c7b2abccddaeb8cc1c2"} Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.432905 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="172106df5f4ac4e68344de8996ff18743b4169dd116b1c7b2abccddaeb8cc1c2" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.434735 4794 generic.go:334] "Generic (PLEG): container finished" podID="2f564c83-65cd-4eb0-81b3-155b5a221041" containerID="9a1377941f258a19d948dcca0bb9670bdaac5c722217194a63ccabb43428ad31" exitCode=0 Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.434777 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" event={"ID":"2f564c83-65cd-4eb0-81b3-155b5a221041","Type":"ContainerDied","Data":"9a1377941f258a19d948dcca0bb9670bdaac5c722217194a63ccabb43428ad31"} Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.434796 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" event={"ID":"2f564c83-65cd-4eb0-81b3-155b5a221041","Type":"ContainerStarted","Data":"4d44083cae3fae77c9ae90af61e3dbf2c76a470baa3b1941555216813438bf24"} Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.448939 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-576f6bf7c-mkh5d_eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c/console/0.log" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.448985 4794 generic.go:334] "Generic (PLEG): container finished" podID="eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c" containerID="bb82561ee7b85bb649642db64d1c9def75f7f9722c2e24704b38b18398a51d21" exitCode=2 Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 
17:20:44.449078 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-576f6bf7c-mkh5d" event={"ID":"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c","Type":"ContainerDied","Data":"bb82561ee7b85bb649642db64d1c9def75f7f9722c2e24704b38b18398a51d21"} Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.451195 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-nbn72" event={"ID":"c3bc8a6c-f954-4825-8853-316738b0eb94","Type":"ContainerStarted","Data":"d329847a9ebf9636e9b55cd869afe7fc46d427b0ca5c513703af27dd785771ff"} Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.451220 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-nbn72" event={"ID":"c3bc8a6c-f954-4825-8853-316738b0eb94","Type":"ContainerStarted","Data":"8c5f7f273f82665a0aace2ffcedd7b6d62c31fc396c850fe09c601f92667a7cd"} Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.453689 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l72f2" event={"ID":"aee4dca2-9581-44c6-91db-ce6516f9b05e","Type":"ContainerStarted","Data":"59ced11604585b9d648afc208f54a7434f86b50d152324df8d282f2df49c0503"} Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.502857 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-nbn72" podStartSLOduration=7.50283405 podStartE2EDuration="7.50283405s" podCreationTimestamp="2026-02-16 17:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:20:44.475188131 +0000 UTC m=+1270.423282778" watchObservedRunningTime="2026-02-16 17:20:44.50283405 +0000 UTC m=+1270.450928697" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.574856 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-z7z5p"] Feb 16 17:20:44 
crc kubenswrapper[4794]: W0216 17:20:44.589706 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd475f629_e8d0_4167_ba4d_37918b079499.slice/crio-2ffcc0e3e02640ccc54dbb66e3184c3c5b7b7ba125b04cd1a463b7699ca4ae03 WatchSource:0}: Error finding container 2ffcc0e3e02640ccc54dbb66e3184c3c5b7b7ba125b04cd1a463b7699ca4ae03: Status 404 returned error can't find the container with id 2ffcc0e3e02640ccc54dbb66e3184c3c5b7b7ba125b04cd1a463b7699ca4ae03 Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.622595 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-w2gs8"] Feb 16 17:20:44 crc kubenswrapper[4794]: W0216 17:20:44.624799 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4ed7df7_08c2_4c06_bd2b_14ea362191d1.slice/crio-e535ce4fba577997250d32e5383ad29aa697318dfb4f624a8e56b91ab34fcc31 WatchSource:0}: Error finding container e535ce4fba577997250d32e5383ad29aa697318dfb4f624a8e56b91ab34fcc31: Status 404 returned error can't find the container with id e535ce4fba577997250d32e5383ad29aa697318dfb4f624a8e56b91ab34fcc31 Feb 16 17:20:44 crc kubenswrapper[4794]: W0216 17:20:44.637948 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84dc223e_f01c_424c_802a_3e1a5ad819be.slice/crio-025087f3ff8e2c57bcf1bcd9580bbdad5bcb800ac9a5df526acc759fcd226ab0 WatchSource:0}: Error finding container 025087f3ff8e2c57bcf1bcd9580bbdad5bcb800ac9a5df526acc759fcd226ab0: Status 404 returned error can't find the container with id 025087f3ff8e2c57bcf1bcd9580bbdad5bcb800ac9a5df526acc759fcd226ab0 Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.655816 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-42a4-account-create-update-r755d"] Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 
17:20:44.890340 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-576f6bf7c-mkh5d_eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c/console/0.log" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.890605 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-576f6bf7c-mkh5d" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.988140 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d74cx\" (UniqueName: \"kubernetes.io/projected/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-kube-api-access-d74cx\") pod \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.988206 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-console-config\") pod \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.988234 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-trusted-ca-bundle\") pod \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.988286 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-oauth-serving-cert\") pod \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.988463 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-console-serving-cert\") pod \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.988538 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-console-oauth-config\") pod \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.988585 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-service-ca\") pod \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\" (UID: \"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c\") " Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.989935 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c" (UID: "eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.989930 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-service-ca" (OuterVolumeSpecName: "service-ca") pod "eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c" (UID: "eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.989944 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-console-config" (OuterVolumeSpecName: "console-config") pod "eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c" (UID: "eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.990651 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c" (UID: "eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.996065 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c" (UID: "eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.996215 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-kube-api-access-d74cx" (OuterVolumeSpecName: "kube-api-access-d74cx") pod "eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c" (UID: "eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c"). InnerVolumeSpecName "kube-api-access-d74cx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:44 crc kubenswrapper[4794]: I0216 17:20:44.998535 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c" (UID: "eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.091264 4794 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.091321 4794 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.091394 4794 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.091406 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d74cx\" (UniqueName: \"kubernetes.io/projected/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-kube-api-access-d74cx\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.091420 4794 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.091431 4794 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.091441 4794 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.466483 4794 generic.go:334] "Generic (PLEG): container finished" podID="c3bc8a6c-f954-4825-8853-316738b0eb94" containerID="d329847a9ebf9636e9b55cd869afe7fc46d427b0ca5c513703af27dd785771ff" exitCode=0 Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.466579 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-nbn72" event={"ID":"c3bc8a6c-f954-4825-8853-316738b0eb94","Type":"ContainerDied","Data":"d329847a9ebf9636e9b55cd869afe7fc46d427b0ca5c513703af27dd785771ff"} Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.470471 4794 generic.go:334] "Generic (PLEG): container finished" podID="aee4dca2-9581-44c6-91db-ce6516f9b05e" containerID="f37f4684a09448f6f61fc02bd7ce900a1e3657f204183b4716858e9c36fae406" exitCode=0 Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.470549 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l72f2" event={"ID":"aee4dca2-9581-44c6-91db-ce6516f9b05e","Type":"ContainerDied","Data":"f37f4684a09448f6f61fc02bd7ce900a1e3657f204183b4716858e9c36fae406"} Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.472517 4794 generic.go:334] "Generic (PLEG): container finished" podID="b4ed7df7-08c2-4c06-bd2b-14ea362191d1" containerID="ae5408138554b5b91af1e51726e147d638e8ba51378075aaa6abe78224194f31" exitCode=0 Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.472568 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-42a4-account-create-update-r755d" event={"ID":"b4ed7df7-08c2-4c06-bd2b-14ea362191d1","Type":"ContainerDied","Data":"ae5408138554b5b91af1e51726e147d638e8ba51378075aaa6abe78224194f31"} Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.472590 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-42a4-account-create-update-r755d" event={"ID":"b4ed7df7-08c2-4c06-bd2b-14ea362191d1","Type":"ContainerStarted","Data":"e535ce4fba577997250d32e5383ad29aa697318dfb4f624a8e56b91ab34fcc31"} Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.474888 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" event={"ID":"2f564c83-65cd-4eb0-81b3-155b5a221041","Type":"ContainerStarted","Data":"a9197f571a6a4ec904f6ebf4455d0bbf732cd435435fcc0805cffabdeb5ad6df"} Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.474962 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.487119 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-576f6bf7c-mkh5d_eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c/console/0.log" Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.487246 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-576f6bf7c-mkh5d" event={"ID":"eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c","Type":"ContainerDied","Data":"d75f1d1aa108dfa2b7102778f83ee9b54bc07371a3b99352544184204aab2d65"} Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.487290 4794 scope.go:117] "RemoveContainer" containerID="bb82561ee7b85bb649642db64d1c9def75f7f9722c2e24704b38b18398a51d21" Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.487418 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-576f6bf7c-mkh5d" Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.489426 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w2gs8" event={"ID":"84dc223e-f01c-424c-802a-3e1a5ad819be","Type":"ContainerStarted","Data":"025087f3ff8e2c57bcf1bcd9580bbdad5bcb800ac9a5df526acc759fcd226ab0"} Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.491044 4794 generic.go:334] "Generic (PLEG): container finished" podID="d475f629-e8d0-4167-ba4d-37918b079499" containerID="59917e61f52528956f2e22aba28ce904d4a6214fa1d600aeff7c7ed4187f0a79" exitCode=0 Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.491116 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z7z5p" event={"ID":"d475f629-e8d0-4167-ba4d-37918b079499","Type":"ContainerDied","Data":"59917e61f52528956f2e22aba28ce904d4a6214fa1d600aeff7c7ed4187f0a79"} Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.491165 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z7z5p" event={"ID":"d475f629-e8d0-4167-ba4d-37918b079499","Type":"ContainerStarted","Data":"2ffcc0e3e02640ccc54dbb66e3184c3c5b7b7ba125b04cd1a463b7699ca4ae03"} Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.536587 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" podStartSLOduration=8.536569386 podStartE2EDuration="8.536569386s" podCreationTimestamp="2026-02-16 17:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:20:45.518963721 +0000 UTC m=+1271.467058368" watchObservedRunningTime="2026-02-16 17:20:45.536569386 +0000 UTC m=+1271.484664033" Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.585457 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-576f6bf7c-mkh5d"] Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.593802 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-576f6bf7c-mkh5d"] Feb 16 17:20:45 crc kubenswrapper[4794]: I0216 17:20:45.957243 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-7b9c-account-create-update-xqtkk" Feb 16 17:20:46 crc kubenswrapper[4794]: I0216 17:20:46.024583 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7fd9bb0-100b-4941-80d2-1a9ec63423be-operator-scripts\") pod \"a7fd9bb0-100b-4941-80d2-1a9ec63423be\" (UID: \"a7fd9bb0-100b-4941-80d2-1a9ec63423be\") " Feb 16 17:20:46 crc kubenswrapper[4794]: I0216 17:20:46.024685 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvcvw\" (UniqueName: \"kubernetes.io/projected/a7fd9bb0-100b-4941-80d2-1a9ec63423be-kube-api-access-vvcvw\") pod \"a7fd9bb0-100b-4941-80d2-1a9ec63423be\" (UID: \"a7fd9bb0-100b-4941-80d2-1a9ec63423be\") " Feb 16 17:20:46 crc kubenswrapper[4794]: I0216 17:20:46.025498 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7fd9bb0-100b-4941-80d2-1a9ec63423be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a7fd9bb0-100b-4941-80d2-1a9ec63423be" (UID: "a7fd9bb0-100b-4941-80d2-1a9ec63423be"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:46 crc kubenswrapper[4794]: I0216 17:20:46.034714 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7fd9bb0-100b-4941-80d2-1a9ec63423be-kube-api-access-vvcvw" (OuterVolumeSpecName: "kube-api-access-vvcvw") pod "a7fd9bb0-100b-4941-80d2-1a9ec63423be" (UID: "a7fd9bb0-100b-4941-80d2-1a9ec63423be"). 
InnerVolumeSpecName "kube-api-access-vvcvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:46 crc kubenswrapper[4794]: I0216 17:20:46.127619 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7fd9bb0-100b-4941-80d2-1a9ec63423be-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:46 crc kubenswrapper[4794]: I0216 17:20:46.127653 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvcvw\" (UniqueName: \"kubernetes.io/projected/a7fd9bb0-100b-4941-80d2-1a9ec63423be-kube-api-access-vvcvw\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:46 crc kubenswrapper[4794]: I0216 17:20:46.517882 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-7b9c-account-create-update-xqtkk" event={"ID":"a7fd9bb0-100b-4941-80d2-1a9ec63423be","Type":"ContainerDied","Data":"955e628a8d6bae76f634d0260259c4ecb630dc303bb5a8da417f7caeaf453ad0"} Feb 16 17:20:46 crc kubenswrapper[4794]: I0216 17:20:46.517927 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="955e628a8d6bae76f634d0260259c4ecb630dc303bb5a8da417f7caeaf453ad0" Feb 16 17:20:46 crc kubenswrapper[4794]: I0216 17:20:46.518004 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-7b9c-account-create-update-xqtkk" Feb 16 17:20:46 crc kubenswrapper[4794]: I0216 17:20:46.538102 4794 generic.go:334] "Generic (PLEG): container finished" podID="14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" containerID="b004f25d6252ce636e11c9fcd2ce973a1cb440882c3b2a80e3a5d3acf1ec4abf" exitCode=0 Feb 16 17:20:46 crc kubenswrapper[4794]: I0216 17:20:46.538209 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8","Type":"ContainerDied","Data":"b004f25d6252ce636e11c9fcd2ce973a1cb440882c3b2a80e3a5d3acf1ec4abf"} Feb 16 17:20:46 crc kubenswrapper[4794]: I0216 17:20:46.581178 4794 generic.go:334] "Generic (PLEG): container finished" podID="8fb6be66-7fef-4554-897b-30d9f4637138" containerID="a5611785ff80a2040a0e9583d8fe5567236fc1088f42337abc77e4841bba2724" exitCode=0 Feb 16 17:20:46 crc kubenswrapper[4794]: I0216 17:20:46.581821 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"8fb6be66-7fef-4554-897b-30d9f4637138","Type":"ContainerDied","Data":"a5611785ff80a2040a0e9583d8fe5567236fc1088f42337abc77e4841bba2724"} Feb 16 17:20:46 crc kubenswrapper[4794]: I0216 17:20:46.614515 4794 generic.go:334] "Generic (PLEG): container finished" podID="026253d8-eaea-4c12-91e0-455331cdaa5e" containerID="5f28321fb236a1745593b9c7644f21bfbf3b8430f0f512f514c4f8f1c040ee02" exitCode=0 Feb 16 17:20:46 crc kubenswrapper[4794]: I0216 17:20:46.614604 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"026253d8-eaea-4c12-91e0-455331cdaa5e","Type":"ContainerDied","Data":"5f28321fb236a1745593b9c7644f21bfbf3b8430f0f512f514c4f8f1c040ee02"} Feb 16 17:20:46 crc kubenswrapper[4794]: I0216 17:20:46.626985 4794 generic.go:334] "Generic (PLEG): container finished" podID="47572286-fbbf-4189-9c6f-feb54624ee2a" 
containerID="92a5854561520f29512043bfa53b1c5f9a1f3caae385e57af28b57dc0df64414" exitCode=0 Feb 16 17:20:46 crc kubenswrapper[4794]: I0216 17:20:46.627149 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"47572286-fbbf-4189-9c6f-feb54624ee2a","Type":"ContainerDied","Data":"92a5854561520f29512043bfa53b1c5f9a1f3caae385e57af28b57dc0df64414"} Feb 16 17:20:46 crc kubenswrapper[4794]: I0216 17:20:46.805716 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c" path="/var/lib/kubelet/pods/eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c/volumes" Feb 16 17:20:47 crc kubenswrapper[4794]: I0216 17:20:47.062296 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-etc-swift\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0" Feb 16 17:20:47 crc kubenswrapper[4794]: E0216 17:20:47.062462 4794 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 17:20:47 crc kubenswrapper[4794]: E0216 17:20:47.062482 4794 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 17:20:47 crc kubenswrapper[4794]: E0216 17:20:47.062538 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-etc-swift podName:54acc9db-6bd7-463f-8637-6aa39ed3eb11 nodeName:}" failed. No retries permitted until 2026-02-16 17:20:55.062519546 +0000 UTC m=+1281.010614193 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-etc-swift") pod "swift-storage-0" (UID: "54acc9db-6bd7-463f-8637-6aa39ed3eb11") : configmap "swift-ring-files" not found Feb 16 17:20:47 crc kubenswrapper[4794]: I0216 17:20:47.600798 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-42a4-account-create-update-r755d" Feb 16 17:20:47 crc kubenswrapper[4794]: I0216 17:20:47.638393 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d6f0b7b-1214-4425-a850-09933e0e9a6e","Type":"ContainerStarted","Data":"b3d56ccbfff73e0edcfb351fa25da3358d8bb011f8536b7a1e3d48a7ef197ab0"} Feb 16 17:20:47 crc kubenswrapper[4794]: I0216 17:20:47.640103 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-42a4-account-create-update-r755d" event={"ID":"b4ed7df7-08c2-4c06-bd2b-14ea362191d1","Type":"ContainerDied","Data":"e535ce4fba577997250d32e5383ad29aa697318dfb4f624a8e56b91ab34fcc31"} Feb 16 17:20:47 crc kubenswrapper[4794]: I0216 17:20:47.640142 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e535ce4fba577997250d32e5383ad29aa697318dfb4f624a8e56b91ab34fcc31" Feb 16 17:20:47 crc kubenswrapper[4794]: I0216 17:20:47.640194 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-42a4-account-create-update-r755d" Feb 16 17:20:47 crc kubenswrapper[4794]: I0216 17:20:47.779353 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj8gr\" (UniqueName: \"kubernetes.io/projected/b4ed7df7-08c2-4c06-bd2b-14ea362191d1-kube-api-access-gj8gr\") pod \"b4ed7df7-08c2-4c06-bd2b-14ea362191d1\" (UID: \"b4ed7df7-08c2-4c06-bd2b-14ea362191d1\") " Feb 16 17:20:47 crc kubenswrapper[4794]: I0216 17:20:47.779513 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4ed7df7-08c2-4c06-bd2b-14ea362191d1-operator-scripts\") pod \"b4ed7df7-08c2-4c06-bd2b-14ea362191d1\" (UID: \"b4ed7df7-08c2-4c06-bd2b-14ea362191d1\") " Feb 16 17:20:47 crc kubenswrapper[4794]: I0216 17:20:47.780297 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4ed7df7-08c2-4c06-bd2b-14ea362191d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4ed7df7-08c2-4c06-bd2b-14ea362191d1" (UID: "b4ed7df7-08c2-4c06-bd2b-14ea362191d1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:47 crc kubenswrapper[4794]: I0216 17:20:47.783121 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4ed7df7-08c2-4c06-bd2b-14ea362191d1-kube-api-access-gj8gr" (OuterVolumeSpecName: "kube-api-access-gj8gr") pod "b4ed7df7-08c2-4c06-bd2b-14ea362191d1" (UID: "b4ed7df7-08c2-4c06-bd2b-14ea362191d1"). InnerVolumeSpecName "kube-api-access-gj8gr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:47 crc kubenswrapper[4794]: I0216 17:20:47.882695 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj8gr\" (UniqueName: \"kubernetes.io/projected/b4ed7df7-08c2-4c06-bd2b-14ea362191d1-kube-api-access-gj8gr\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:47 crc kubenswrapper[4794]: I0216 17:20:47.882726 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4ed7df7-08c2-4c06-bd2b-14ea362191d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:48 crc kubenswrapper[4794]: E0216 17:20:48.103484 4794 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4ed7df7_08c2_4c06_bd2b_14ea362191d1.slice\": RecentStats: unable to find data in memory cache]" Feb 16 17:20:49 crc kubenswrapper[4794]: I0216 17:20:49.990791 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.140924 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.140997 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.320792 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-z7z5p" Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.365047 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d475f629-e8d0-4167-ba4d-37918b079499-operator-scripts\") pod \"d475f629-e8d0-4167-ba4d-37918b079499\" (UID: \"d475f629-e8d0-4167-ba4d-37918b079499\") " Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.365375 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcdlc\" (UniqueName: \"kubernetes.io/projected/d475f629-e8d0-4167-ba4d-37918b079499-kube-api-access-kcdlc\") pod \"d475f629-e8d0-4167-ba4d-37918b079499\" (UID: \"d475f629-e8d0-4167-ba4d-37918b079499\") " Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.365546 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d475f629-e8d0-4167-ba4d-37918b079499-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d475f629-e8d0-4167-ba4d-37918b079499" (UID: "d475f629-e8d0-4167-ba4d-37918b079499"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.366353 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d475f629-e8d0-4167-ba4d-37918b079499-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.373568 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d475f629-e8d0-4167-ba4d-37918b079499-kube-api-access-kcdlc" (OuterVolumeSpecName: "kube-api-access-kcdlc") pod "d475f629-e8d0-4167-ba4d-37918b079499" (UID: "d475f629-e8d0-4167-ba4d-37918b079499"). InnerVolumeSpecName "kube-api-access-kcdlc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.446444 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-l72f2" Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.467463 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwhv9\" (UniqueName: \"kubernetes.io/projected/aee4dca2-9581-44c6-91db-ce6516f9b05e-kube-api-access-jwhv9\") pod \"aee4dca2-9581-44c6-91db-ce6516f9b05e\" (UID: \"aee4dca2-9581-44c6-91db-ce6516f9b05e\") " Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.467662 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aee4dca2-9581-44c6-91db-ce6516f9b05e-operator-scripts\") pod \"aee4dca2-9581-44c6-91db-ce6516f9b05e\" (UID: \"aee4dca2-9581-44c6-91db-ce6516f9b05e\") " Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.468388 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcdlc\" (UniqueName: \"kubernetes.io/projected/d475f629-e8d0-4167-ba4d-37918b079499-kube-api-access-kcdlc\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.469570 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aee4dca2-9581-44c6-91db-ce6516f9b05e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aee4dca2-9581-44c6-91db-ce6516f9b05e" (UID: "aee4dca2-9581-44c6-91db-ce6516f9b05e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.473211 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aee4dca2-9581-44c6-91db-ce6516f9b05e-kube-api-access-jwhv9" (OuterVolumeSpecName: "kube-api-access-jwhv9") pod "aee4dca2-9581-44c6-91db-ce6516f9b05e" (UID: "aee4dca2-9581-44c6-91db-ce6516f9b05e"). InnerVolumeSpecName "kube-api-access-jwhv9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.481711 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-nbn72"
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.569959 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shs8l\" (UniqueName: \"kubernetes.io/projected/c3bc8a6c-f954-4825-8853-316738b0eb94-kube-api-access-shs8l\") pod \"c3bc8a6c-f954-4825-8853-316738b0eb94\" (UID: \"c3bc8a6c-f954-4825-8853-316738b0eb94\") "
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.570437 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3bc8a6c-f954-4825-8853-316738b0eb94-operator-scripts\") pod \"c3bc8a6c-f954-4825-8853-316738b0eb94\" (UID: \"c3bc8a6c-f954-4825-8853-316738b0eb94\") "
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.571122 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aee4dca2-9581-44c6-91db-ce6516f9b05e-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.571407 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwhv9\" (UniqueName: \"kubernetes.io/projected/aee4dca2-9581-44c6-91db-ce6516f9b05e-kube-api-access-jwhv9\") on node \"crc\" DevicePath \"\""
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.573555 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3bc8a6c-f954-4825-8853-316738b0eb94-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3bc8a6c-f954-4825-8853-316738b0eb94" (UID: "c3bc8a6c-f954-4825-8853-316738b0eb94"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.579005 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3bc8a6c-f954-4825-8853-316738b0eb94-kube-api-access-shs8l" (OuterVolumeSpecName: "kube-api-access-shs8l") pod "c3bc8a6c-f954-4825-8853-316738b0eb94" (UID: "c3bc8a6c-f954-4825-8853-316738b0eb94"). InnerVolumeSpecName "kube-api-access-shs8l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.673421 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3bc8a6c-f954-4825-8853-316738b0eb94-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.673460 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shs8l\" (UniqueName: \"kubernetes.io/projected/c3bc8a6c-f954-4825-8853-316738b0eb94-kube-api-access-shs8l\") on node \"crc\" DevicePath \"\""
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.676277 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"026253d8-eaea-4c12-91e0-455331cdaa5e","Type":"ContainerStarted","Data":"a25e90d93e73c28d15594943c02e0b9a83bbf60e93bace161d3cb1740bb284e8"}
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.676514 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.680088 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z7z5p" event={"ID":"d475f629-e8d0-4167-ba4d-37918b079499","Type":"ContainerDied","Data":"2ffcc0e3e02640ccc54dbb66e3184c3c5b7b7ba125b04cd1a463b7699ca4ae03"}
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.680118 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ffcc0e3e02640ccc54dbb66e3184c3c5b7b7ba125b04cd1a463b7699ca4ae03"
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.680165 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-z7z5p"
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.687592 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-nbn72"
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.688383 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-nbn72" event={"ID":"c3bc8a6c-f954-4825-8853-316738b0eb94","Type":"ContainerDied","Data":"8c5f7f273f82665a0aace2ffcedd7b6d62c31fc396c850fe09c601f92667a7cd"}
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.688457 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c5f7f273f82665a0aace2ffcedd7b6d62c31fc396c850fe09c601f92667a7cd"
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.711643 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-l72f2"
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.711860 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l72f2" event={"ID":"aee4dca2-9581-44c6-91db-ce6516f9b05e","Type":"ContainerDied","Data":"59ced11604585b9d648afc208f54a7434f86b50d152324df8d282f2df49c0503"}
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.711938 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59ced11604585b9d648afc208f54a7434f86b50d152324df8d282f2df49c0503"
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.714346 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"47572286-fbbf-4189-9c6f-feb54624ee2a","Type":"ContainerStarted","Data":"78060d4db70d41c4b478fe59a79e973c4b66567fab8194633868092f4711eba2"}
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.715228 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.719062 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.536146242 podStartE2EDuration="1m0.719041774s" podCreationTimestamp="2026-02-16 17:19:50 +0000 UTC" firstStartedPulling="2026-02-16 17:19:53.11844428 +0000 UTC m=+1219.066538937" lastFinishedPulling="2026-02-16 17:20:12.301339822 +0000 UTC m=+1238.249434469" observedRunningTime="2026-02-16 17:20:50.707858441 +0000 UTC m=+1276.655953088" watchObservedRunningTime="2026-02-16 17:20:50.719041774 +0000 UTC m=+1276.667136421"
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.750230 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8","Type":"ContainerStarted","Data":"a1ccf81377d5eb39238f66da309168f15f2ef4541d8767081e5210e38916edef"}
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.750835 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2"
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.754901 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w2gs8" event={"ID":"84dc223e-f01c-424c-802a-3e1a5ad819be","Type":"ContainerStarted","Data":"b25823c2a524cd064cea3525dd73d61d5012a1849f3dd84d5f3e499b993e3220"}
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.759581 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"8fb6be66-7fef-4554-897b-30d9f4637138","Type":"ContainerStarted","Data":"523fb59c255a777ff296c7e21c97e54cffbe6d1d35fb7cb70cd1ded47a89b767"}
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.760509 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1"
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.782332 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=46.589456847 podStartE2EDuration="59.782243782s" podCreationTimestamp="2026-02-16 17:19:51 +0000 UTC" firstStartedPulling="2026-02-16 17:19:59.094506491 +0000 UTC m=+1225.042601138" lastFinishedPulling="2026-02-16 17:20:12.287293426 +0000 UTC m=+1238.235388073" observedRunningTime="2026-02-16 17:20:50.767823087 +0000 UTC m=+1276.715917734" watchObservedRunningTime="2026-02-16 17:20:50.782243782 +0000 UTC m=+1276.730338429"
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.800833 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=41.850297043 podStartE2EDuration="1m0.800817191s" podCreationTimestamp="2026-02-16 17:19:50 +0000 UTC" firstStartedPulling="2026-02-16 17:19:53.329876234 +0000 UTC m=+1219.277970881" lastFinishedPulling="2026-02-16 17:20:12.280396382 +0000 UTC m=+1238.228491029" observedRunningTime="2026-02-16 17:20:50.794579084 +0000 UTC m=+1276.742673731" watchObservedRunningTime="2026-02-16 17:20:50.800817191 +0000 UTC m=+1276.748911838"
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.836043 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=42.100943452 podStartE2EDuration="1m0.836020862s" podCreationTimestamp="2026-02-16 17:19:50 +0000 UTC" firstStartedPulling="2026-02-16 17:19:53.554169436 +0000 UTC m=+1219.502264083" lastFinishedPulling="2026-02-16 17:20:12.289246846 +0000 UTC m=+1238.237341493" observedRunningTime="2026-02-16 17:20:50.82250198 +0000 UTC m=+1276.770596647" watchObservedRunningTime="2026-02-16 17:20:50.836020862 +0000 UTC m=+1276.784115509"
Feb 16 17:20:50 crc kubenswrapper[4794]: I0216 17:20:50.852208 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-w2gs8" podStartSLOduration=6.175833036 podStartE2EDuration="11.85219204s" podCreationTimestamp="2026-02-16 17:20:39 +0000 UTC" firstStartedPulling="2026-02-16 17:20:44.64005711 +0000 UTC m=+1270.588151757" lastFinishedPulling="2026-02-16 17:20:50.316416114 +0000 UTC m=+1276.264510761" observedRunningTime="2026-02-16 17:20:50.841929581 +0000 UTC m=+1276.790024248" watchObservedRunningTime="2026-02-16 17:20:50.85219204 +0000 UTC m=+1276.800286687"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.337420 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-j5dwq"]
Feb 16 17:20:53 crc kubenswrapper[4794]: E0216 17:20:53.342705 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7fd9bb0-100b-4941-80d2-1a9ec63423be" containerName="mariadb-account-create-update"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.342935 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7fd9bb0-100b-4941-80d2-1a9ec63423be" containerName="mariadb-account-create-update"
Feb 16 17:20:53 crc kubenswrapper[4794]: E0216 17:20:53.343000 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee4dca2-9581-44c6-91db-ce6516f9b05e" containerName="mariadb-database-create"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.343051 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee4dca2-9581-44c6-91db-ce6516f9b05e" containerName="mariadb-database-create"
Feb 16 17:20:53 crc kubenswrapper[4794]: E0216 17:20:53.343113 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c86def-4e37-40ef-847d-ccb9dd6c99a9" containerName="mariadb-database-create"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.343167 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c86def-4e37-40ef-847d-ccb9dd6c99a9" containerName="mariadb-database-create"
Feb 16 17:20:53 crc kubenswrapper[4794]: E0216 17:20:53.343220 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4ed7df7-08c2-4c06-bd2b-14ea362191d1" containerName="mariadb-account-create-update"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.343268 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4ed7df7-08c2-4c06-bd2b-14ea362191d1" containerName="mariadb-account-create-update"
Feb 16 17:20:53 crc kubenswrapper[4794]: E0216 17:20:53.343343 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7351df94-ade5-4e5e-b281-b195301dc37d" containerName="mariadb-database-create"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.343407 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="7351df94-ade5-4e5e-b281-b195301dc37d" containerName="mariadb-database-create"
Feb 16 17:20:53 crc kubenswrapper[4794]: E0216 17:20:53.343463 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c" containerName="console"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.343515 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c" containerName="console"
Feb 16 17:20:53 crc kubenswrapper[4794]: E0216 17:20:53.343585 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d346472-4e86-4519-8307-ee7cf5f74280" containerName="mariadb-account-create-update"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.343634 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d346472-4e86-4519-8307-ee7cf5f74280" containerName="mariadb-account-create-update"
Feb 16 17:20:53 crc kubenswrapper[4794]: E0216 17:20:53.343686 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d475f629-e8d0-4167-ba4d-37918b079499" containerName="mariadb-account-create-update"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.343736 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d475f629-e8d0-4167-ba4d-37918b079499" containerName="mariadb-account-create-update"
Feb 16 17:20:53 crc kubenswrapper[4794]: E0216 17:20:53.343793 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8cd1b17-5173-42b6-a51d-e2a057d404f4" containerName="mariadb-account-create-update"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.343844 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8cd1b17-5173-42b6-a51d-e2a057d404f4" containerName="mariadb-account-create-update"
Feb 16 17:20:53 crc kubenswrapper[4794]: E0216 17:20:53.343900 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3bc8a6c-f954-4825-8853-316738b0eb94" containerName="mariadb-database-create"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.343948 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3bc8a6c-f954-4825-8853-316738b0eb94" containerName="mariadb-database-create"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.344247 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d475f629-e8d0-4167-ba4d-37918b079499" containerName="mariadb-account-create-update"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.344329 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="aee4dca2-9581-44c6-91db-ce6516f9b05e" containerName="mariadb-database-create"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.344391 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8cd1b17-5173-42b6-a51d-e2a057d404f4" containerName="mariadb-account-create-update"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.344454 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="7351df94-ade5-4e5e-b281-b195301dc37d" containerName="mariadb-database-create"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.344509 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3bc8a6c-f954-4825-8853-316738b0eb94" containerName="mariadb-database-create"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.344596 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7fd9bb0-100b-4941-80d2-1a9ec63423be" containerName="mariadb-account-create-update"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.344656 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c86def-4e37-40ef-847d-ccb9dd6c99a9" containerName="mariadb-database-create"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.344722 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4ed7df7-08c2-4c06-bd2b-14ea362191d1" containerName="mariadb-account-create-update"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.344776 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb40b9f0-5e73-4c9c-b8e9-24ca7a81451c" containerName="console"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.344827 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d346472-4e86-4519-8307-ee7cf5f74280" containerName="mariadb-account-create-update"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.345633 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-j5dwq"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.349387 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-j5dwq"]
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.429644 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxcb6\" (UniqueName: \"kubernetes.io/projected/806a0c64-26e2-4021-875a-b7224b615057-kube-api-access-fxcb6\") pod \"mysqld-exporter-openstack-cell1-db-create-j5dwq\" (UID: \"806a0c64-26e2-4021-875a-b7224b615057\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-j5dwq"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.429729 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/806a0c64-26e2-4021-875a-b7224b615057-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-j5dwq\" (UID: \"806a0c64-26e2-4021-875a-b7224b615057\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-j5dwq"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.472496 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-lswtm"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.537927 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-dndb2"]
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.539491 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxcb6\" (UniqueName: \"kubernetes.io/projected/806a0c64-26e2-4021-875a-b7224b615057-kube-api-access-fxcb6\") pod \"mysqld-exporter-openstack-cell1-db-create-j5dwq\" (UID: \"806a0c64-26e2-4021-875a-b7224b615057\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-j5dwq"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.556100 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/806a0c64-26e2-4021-875a-b7224b615057-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-j5dwq\" (UID: \"806a0c64-26e2-4021-875a-b7224b615057\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-j5dwq"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.555702 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-dndb2" podUID="53be55ab-28ee-4368-8650-f5c90340992a" containerName="dnsmasq-dns" containerID="cri-o://6eff68df668384167a35834cfcb270bdd8a30fee88c7235d549d43a0f2df31b2" gracePeriod=10
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.557195 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/806a0c64-26e2-4021-875a-b7224b615057-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-j5dwq\" (UID: \"806a0c64-26e2-4021-875a-b7224b615057\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-j5dwq"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.589687 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-3566-account-create-update-8tq2m"]
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.591250 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-3566-account-create-update-8tq2m"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.593287 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.595719 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxcb6\" (UniqueName: \"kubernetes.io/projected/806a0c64-26e2-4021-875a-b7224b615057-kube-api-access-fxcb6\") pod \"mysqld-exporter-openstack-cell1-db-create-j5dwq\" (UID: \"806a0c64-26e2-4021-875a-b7224b615057\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-j5dwq"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.626261 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-3566-account-create-update-8tq2m"]
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.660546 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk6nh\" (UniqueName: \"kubernetes.io/projected/b39da8d5-1def-498c-9a64-d015fa5de3b3-kube-api-access-zk6nh\") pod \"mysqld-exporter-3566-account-create-update-8tq2m\" (UID: \"b39da8d5-1def-498c-9a64-d015fa5de3b3\") " pod="openstack/mysqld-exporter-3566-account-create-update-8tq2m"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.660611 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b39da8d5-1def-498c-9a64-d015fa5de3b3-operator-scripts\") pod \"mysqld-exporter-3566-account-create-update-8tq2m\" (UID: \"b39da8d5-1def-498c-9a64-d015fa5de3b3\") " pod="openstack/mysqld-exporter-3566-account-create-update-8tq2m"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.688036 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-z7z5p"]
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.699853 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-j5dwq"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.703239 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-z7z5p"]
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.762725 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk6nh\" (UniqueName: \"kubernetes.io/projected/b39da8d5-1def-498c-9a64-d015fa5de3b3-kube-api-access-zk6nh\") pod \"mysqld-exporter-3566-account-create-update-8tq2m\" (UID: \"b39da8d5-1def-498c-9a64-d015fa5de3b3\") " pod="openstack/mysqld-exporter-3566-account-create-update-8tq2m"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.763048 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b39da8d5-1def-498c-9a64-d015fa5de3b3-operator-scripts\") pod \"mysqld-exporter-3566-account-create-update-8tq2m\" (UID: \"b39da8d5-1def-498c-9a64-d015fa5de3b3\") " pod="openstack/mysqld-exporter-3566-account-create-update-8tq2m"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.763920 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b39da8d5-1def-498c-9a64-d015fa5de3b3-operator-scripts\") pod \"mysqld-exporter-3566-account-create-update-8tq2m\" (UID: \"b39da8d5-1def-498c-9a64-d015fa5de3b3\") " pod="openstack/mysqld-exporter-3566-account-create-update-8tq2m"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.801173 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk6nh\" (UniqueName: \"kubernetes.io/projected/b39da8d5-1def-498c-9a64-d015fa5de3b3-kube-api-access-zk6nh\") pod \"mysqld-exporter-3566-account-create-update-8tq2m\" (UID: \"b39da8d5-1def-498c-9a64-d015fa5de3b3\") " pod="openstack/mysqld-exporter-3566-account-create-update-8tq2m"
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.828828 4794 generic.go:334] "Generic (PLEG): container finished" podID="53be55ab-28ee-4368-8650-f5c90340992a" containerID="6eff68df668384167a35834cfcb270bdd8a30fee88c7235d549d43a0f2df31b2" exitCode=0
Feb 16 17:20:53 crc kubenswrapper[4794]: I0216 17:20:53.828888 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-dndb2" event={"ID":"53be55ab-28ee-4368-8650-f5c90340992a","Type":"ContainerDied","Data":"6eff68df668384167a35834cfcb270bdd8a30fee88c7235d549d43a0f2df31b2"}
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.032480 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-3566-account-create-update-8tq2m"
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.426383 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-dndb2"
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.482543 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-dns-svc\") pod \"53be55ab-28ee-4368-8650-f5c90340992a\" (UID: \"53be55ab-28ee-4368-8650-f5c90340992a\") "
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.482649 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-ovsdbserver-sb\") pod \"53be55ab-28ee-4368-8650-f5c90340992a\" (UID: \"53be55ab-28ee-4368-8650-f5c90340992a\") "
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.482736 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-ovsdbserver-nb\") pod \"53be55ab-28ee-4368-8650-f5c90340992a\" (UID: \"53be55ab-28ee-4368-8650-f5c90340992a\") "
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.482760 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27mbw\" (UniqueName: \"kubernetes.io/projected/53be55ab-28ee-4368-8650-f5c90340992a-kube-api-access-27mbw\") pod \"53be55ab-28ee-4368-8650-f5c90340992a\" (UID: \"53be55ab-28ee-4368-8650-f5c90340992a\") "
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.482882 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-config\") pod \"53be55ab-28ee-4368-8650-f5c90340992a\" (UID: \"53be55ab-28ee-4368-8650-f5c90340992a\") "
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.494705 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53be55ab-28ee-4368-8650-f5c90340992a-kube-api-access-27mbw" (OuterVolumeSpecName: "kube-api-access-27mbw") pod "53be55ab-28ee-4368-8650-f5c90340992a" (UID: "53be55ab-28ee-4368-8650-f5c90340992a"). InnerVolumeSpecName "kube-api-access-27mbw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.551744 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-config" (OuterVolumeSpecName: "config") pod "53be55ab-28ee-4368-8650-f5c90340992a" (UID: "53be55ab-28ee-4368-8650-f5c90340992a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.573274 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "53be55ab-28ee-4368-8650-f5c90340992a" (UID: "53be55ab-28ee-4368-8650-f5c90340992a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.586688 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-config\") on node \"crc\" DevicePath \"\""
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.587053 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.587159 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27mbw\" (UniqueName: \"kubernetes.io/projected/53be55ab-28ee-4368-8650-f5c90340992a-kube-api-access-27mbw\") on node \"crc\" DevicePath \"\""
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.620202 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "53be55ab-28ee-4368-8650-f5c90340992a" (UID: "53be55ab-28ee-4368-8650-f5c90340992a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.624546 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "53be55ab-28ee-4368-8650-f5c90340992a" (UID: "53be55ab-28ee-4368-8650-f5c90340992a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.652997 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-j5dwq"]
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.688617 4794 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.688646 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53be55ab-28ee-4368-8650-f5c90340992a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.803354 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d475f629-e8d0-4167-ba4d-37918b079499" path="/var/lib/kubelet/pods/d475f629-e8d0-4167-ba4d-37918b079499/volumes"
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.839979 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-dndb2"
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.839968 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-dndb2" event={"ID":"53be55ab-28ee-4368-8650-f5c90340992a","Type":"ContainerDied","Data":"68bb0ae72a7d8a136a4e8f5738f5b3bb80b1dd6b0631373eddb17968d55bbc75"}
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.840113 4794 scope.go:117] "RemoveContainer" containerID="6eff68df668384167a35834cfcb270bdd8a30fee88c7235d549d43a0f2df31b2"
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.845493 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-j5dwq" event={"ID":"806a0c64-26e2-4021-875a-b7224b615057","Type":"ContainerStarted","Data":"e2f13f985a850f6899b1339002c16eb010d5c58d143c5944c24edffc4d10c689"}
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.849230 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d6f0b7b-1214-4425-a850-09933e0e9a6e","Type":"ContainerStarted","Data":"265c2301b55f664d7617bffeb0465ac9aebe7fe9748a52cef3c76c4e8113c166"}
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.869787 4794 scope.go:117] "RemoveContainer" containerID="1436df60f3e91227f02112b0eaafa1b9cc5675ad4b28fc5d7583876df114bda0"
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.870885 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-dndb2"]
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.896380 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-dndb2"]
Feb 16 17:20:54 crc kubenswrapper[4794]: W0216 17:20:54.901183 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb39da8d5_1def_498c_9a64_d015fa5de3b3.slice/crio-2f67279c4e5ac515643d2eb23e752e29c9c6cb92b5f9017c298453092ae6ddbf WatchSource:0}: Error finding container 2f67279c4e5ac515643d2eb23e752e29c9c6cb92b5f9017c298453092ae6ddbf: Status 404 returned error can't find the container with id 2f67279c4e5ac515643d2eb23e752e29c9c6cb92b5f9017c298453092ae6ddbf
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.907273 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-3566-account-create-update-8tq2m"]
Feb 16 17:20:54 crc kubenswrapper[4794]: I0216 17:20:54.954675 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.849348194 podStartE2EDuration="57.954651483s" podCreationTimestamp="2026-02-16 17:19:57 +0000 UTC" firstStartedPulling="2026-02-16 17:20:14.065950967 +0000 UTC m=+1240.014045614" lastFinishedPulling="2026-02-16 17:20:54.171254266 +0000 UTC m=+1280.119348903" observedRunningTime="2026-02-16 17:20:54.92052696 +0000 UTC m=+1280.868621607" watchObservedRunningTime="2026-02-16 17:20:54.954651483 +0000 UTC m=+1280.902746130"
Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.102260 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-etc-swift\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0"
Feb 16 17:20:55 crc kubenswrapper[4794]: E0216 17:20:55.102468 4794 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 17:20:55 crc kubenswrapper[4794]: E0216 17:20:55.102763 4794 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 17:20:55 crc kubenswrapper[4794]: E0216 17:20:55.102811 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-etc-swift podName:54acc9db-6bd7-463f-8637-6aa39ed3eb11 nodeName:}" failed. No retries permitted until 2026-02-16 17:21:11.102793539 +0000 UTC m=+1297.050888186 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-etc-swift") pod "swift-storage-0" (UID: "54acc9db-6bd7-463f-8637-6aa39ed3eb11") : configmap "swift-ring-files" not found
Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.460238 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-vc8d5"]
Feb 16 17:20:55 crc kubenswrapper[4794]: E0216 17:20:55.460718 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53be55ab-28ee-4368-8650-f5c90340992a" containerName="init"
Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.460734 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="53be55ab-28ee-4368-8650-f5c90340992a" containerName="init"
Feb 16 17:20:55 crc kubenswrapper[4794]: E0216 17:20:55.460751 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53be55ab-28ee-4368-8650-f5c90340992a" containerName="dnsmasq-dns"
Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.460757 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="53be55ab-28ee-4368-8650-f5c90340992a" containerName="dnsmasq-dns"
Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.460964 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="53be55ab-28ee-4368-8650-f5c90340992a" containerName="dnsmasq-dns"
Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.461890 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-vc8d5"
Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.470206 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6gc5c"
Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.470495 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.495160 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-vc8d5"]
Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.510434 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjp46\" (UniqueName: \"kubernetes.io/projected/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-kube-api-access-kjp46\") pod \"glance-db-sync-vc8d5\" (UID: \"fb8edc26-5ad8-440e-9d5b-942b0a287ea4\") " pod="openstack/glance-db-sync-vc8d5"
Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.510546 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-config-data\") pod \"glance-db-sync-vc8d5\" (UID: \"fb8edc26-5ad8-440e-9d5b-942b0a287ea4\") " pod="openstack/glance-db-sync-vc8d5"
Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.510681 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-db-sync-config-data\") pod \"glance-db-sync-vc8d5\" (UID: \"fb8edc26-5ad8-440e-9d5b-942b0a287ea4\") " pod="openstack/glance-db-sync-vc8d5"
Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.510737 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-combined-ca-bundle\") pod \"glance-db-sync-vc8d5\" (UID: \"fb8edc26-5ad8-440e-9d5b-942b0a287ea4\") " pod="openstack/glance-db-sync-vc8d5"
Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.612779 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-db-sync-config-data\") pod \"glance-db-sync-vc8d5\" (UID: \"fb8edc26-5ad8-440e-9d5b-942b0a287ea4\") " pod="openstack/glance-db-sync-vc8d5"
Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.612859 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-combined-ca-bundle\") pod \"glance-db-sync-vc8d5\" (UID: \"fb8edc26-5ad8-440e-9d5b-942b0a287ea4\") " pod="openstack/glance-db-sync-vc8d5"
Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.612984 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjp46\" (UniqueName: \"kubernetes.io/projected/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-kube-api-access-kjp46\") pod \"glance-db-sync-vc8d5\" (UID: \"fb8edc26-5ad8-440e-9d5b-942b0a287ea4\") " pod="openstack/glance-db-sync-vc8d5"
Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.613058 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-config-data\") pod \"glance-db-sync-vc8d5\" (UID: \"fb8edc26-5ad8-440e-9d5b-942b0a287ea4\") " pod="openstack/glance-db-sync-vc8d5"
Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.620205 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-db-sync-config-data\") pod \"glance-db-sync-vc8d5\" (UID:
\"fb8edc26-5ad8-440e-9d5b-942b0a287ea4\") " pod="openstack/glance-db-sync-vc8d5" Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.622863 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-config-data\") pod \"glance-db-sync-vc8d5\" (UID: \"fb8edc26-5ad8-440e-9d5b-942b0a287ea4\") " pod="openstack/glance-db-sync-vc8d5" Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.623628 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-combined-ca-bundle\") pod \"glance-db-sync-vc8d5\" (UID: \"fb8edc26-5ad8-440e-9d5b-942b0a287ea4\") " pod="openstack/glance-db-sync-vc8d5" Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.648900 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjp46\" (UniqueName: \"kubernetes.io/projected/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-kube-api-access-kjp46\") pod \"glance-db-sync-vc8d5\" (UID: \"fb8edc26-5ad8-440e-9d5b-942b0a287ea4\") " pod="openstack/glance-db-sync-vc8d5" Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.807064 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-vc8d5" Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.917673 4794 generic.go:334] "Generic (PLEG): container finished" podID="b39da8d5-1def-498c-9a64-d015fa5de3b3" containerID="040c3bcf07f107ec2e2e9901c34cbdf2916f485148627912dccbc483778aa13c" exitCode=0 Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.918167 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-3566-account-create-update-8tq2m" event={"ID":"b39da8d5-1def-498c-9a64-d015fa5de3b3","Type":"ContainerDied","Data":"040c3bcf07f107ec2e2e9901c34cbdf2916f485148627912dccbc483778aa13c"} Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.918212 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-3566-account-create-update-8tq2m" event={"ID":"b39da8d5-1def-498c-9a64-d015fa5de3b3","Type":"ContainerStarted","Data":"2f67279c4e5ac515643d2eb23e752e29c9c6cb92b5f9017c298453092ae6ddbf"} Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.931588 4794 generic.go:334] "Generic (PLEG): container finished" podID="806a0c64-26e2-4021-875a-b7224b615057" containerID="4fb324774e2f6f84e3afb9ea82687141fc92d7dda51c974ea093be5619e031dd" exitCode=0 Feb 16 17:20:55 crc kubenswrapper[4794]: I0216 17:20:55.932287 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-j5dwq" event={"ID":"806a0c64-26e2-4021-875a-b7224b615057","Type":"ContainerDied","Data":"4fb324774e2f6f84e3afb9ea82687141fc92d7dda51c974ea093be5619e031dd"} Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.107621 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-frfcd" podUID="e6ba4ad1-ede1-49d7-a317-8f6d71134947" containerName="ovn-controller" probeResult="failure" output=< Feb 16 17:20:56 crc kubenswrapper[4794]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 17:20:56 crc 
kubenswrapper[4794]: > Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.114274 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.180376 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jgbgf" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.567017 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-frfcd-config-56k8q"] Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.568970 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-frfcd-config-56k8q" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.581409 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.593423 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-frfcd-config-56k8q"] Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.634739 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gk9f\" (UniqueName: \"kubernetes.io/projected/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-kube-api-access-2gk9f\") pod \"ovn-controller-frfcd-config-56k8q\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") " pod="openstack/ovn-controller-frfcd-config-56k8q" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.634818 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-var-log-ovn\") pod \"ovn-controller-frfcd-config-56k8q\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") " pod="openstack/ovn-controller-frfcd-config-56k8q" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.634854 4794 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-var-run\") pod \"ovn-controller-frfcd-config-56k8q\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") " pod="openstack/ovn-controller-frfcd-config-56k8q" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.635241 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-additional-scripts\") pod \"ovn-controller-frfcd-config-56k8q\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") " pod="openstack/ovn-controller-frfcd-config-56k8q" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.635396 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-var-run-ovn\") pod \"ovn-controller-frfcd-config-56k8q\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") " pod="openstack/ovn-controller-frfcd-config-56k8q" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.635739 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-scripts\") pod \"ovn-controller-frfcd-config-56k8q\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") " pod="openstack/ovn-controller-frfcd-config-56k8q" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.686378 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-vc8d5"] Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.738790 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-var-log-ovn\") pod 
\"ovn-controller-frfcd-config-56k8q\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") " pod="openstack/ovn-controller-frfcd-config-56k8q" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.738883 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-var-run\") pod \"ovn-controller-frfcd-config-56k8q\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") " pod="openstack/ovn-controller-frfcd-config-56k8q" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.738951 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-additional-scripts\") pod \"ovn-controller-frfcd-config-56k8q\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") " pod="openstack/ovn-controller-frfcd-config-56k8q" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.738985 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-var-run-ovn\") pod \"ovn-controller-frfcd-config-56k8q\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") " pod="openstack/ovn-controller-frfcd-config-56k8q" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.739095 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-scripts\") pod \"ovn-controller-frfcd-config-56k8q\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") " pod="openstack/ovn-controller-frfcd-config-56k8q" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.739171 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gk9f\" (UniqueName: \"kubernetes.io/projected/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-kube-api-access-2gk9f\") pod 
\"ovn-controller-frfcd-config-56k8q\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") " pod="openstack/ovn-controller-frfcd-config-56k8q" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.739420 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-var-run\") pod \"ovn-controller-frfcd-config-56k8q\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") " pod="openstack/ovn-controller-frfcd-config-56k8q" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.739448 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-var-log-ovn\") pod \"ovn-controller-frfcd-config-56k8q\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") " pod="openstack/ovn-controller-frfcd-config-56k8q" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.739462 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-var-run-ovn\") pod \"ovn-controller-frfcd-config-56k8q\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") " pod="openstack/ovn-controller-frfcd-config-56k8q" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.740110 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-additional-scripts\") pod \"ovn-controller-frfcd-config-56k8q\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") " pod="openstack/ovn-controller-frfcd-config-56k8q" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.741655 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-scripts\") pod \"ovn-controller-frfcd-config-56k8q\" (UID: 
\"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") " pod="openstack/ovn-controller-frfcd-config-56k8q" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.765133 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gk9f\" (UniqueName: \"kubernetes.io/projected/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-kube-api-access-2gk9f\") pod \"ovn-controller-frfcd-config-56k8q\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") " pod="openstack/ovn-controller-frfcd-config-56k8q" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.803689 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53be55ab-28ee-4368-8650-f5c90340992a" path="/var/lib/kubelet/pods/53be55ab-28ee-4368-8650-f5c90340992a/volumes" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.891973 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-frfcd-config-56k8q" Feb 16 17:20:56 crc kubenswrapper[4794]: I0216 17:20:56.948904 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vc8d5" event={"ID":"fb8edc26-5ad8-440e-9d5b-942b0a287ea4","Type":"ContainerStarted","Data":"0711b1e3030ef020187ce0f0267a0c3cfda62765bb915a83b935c700fa60e163"} Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.534389 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-3566-account-create-update-8tq2m" Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.575334 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b39da8d5-1def-498c-9a64-d015fa5de3b3-operator-scripts\") pod \"b39da8d5-1def-498c-9a64-d015fa5de3b3\" (UID: \"b39da8d5-1def-498c-9a64-d015fa5de3b3\") " Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.575430 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk6nh\" (UniqueName: \"kubernetes.io/projected/b39da8d5-1def-498c-9a64-d015fa5de3b3-kube-api-access-zk6nh\") pod \"b39da8d5-1def-498c-9a64-d015fa5de3b3\" (UID: \"b39da8d5-1def-498c-9a64-d015fa5de3b3\") " Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.582837 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b39da8d5-1def-498c-9a64-d015fa5de3b3-kube-api-access-zk6nh" (OuterVolumeSpecName: "kube-api-access-zk6nh") pod "b39da8d5-1def-498c-9a64-d015fa5de3b3" (UID: "b39da8d5-1def-498c-9a64-d015fa5de3b3"). InnerVolumeSpecName "kube-api-access-zk6nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.584614 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b39da8d5-1def-498c-9a64-d015fa5de3b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b39da8d5-1def-498c-9a64-d015fa5de3b3" (UID: "b39da8d5-1def-498c-9a64-d015fa5de3b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.604041 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-j5dwq" Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.677633 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b39da8d5-1def-498c-9a64-d015fa5de3b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.677674 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk6nh\" (UniqueName: \"kubernetes.io/projected/b39da8d5-1def-498c-9a64-d015fa5de3b3-kube-api-access-zk6nh\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.739315 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-frfcd-config-56k8q"] Feb 16 17:20:57 crc kubenswrapper[4794]: W0216 17:20:57.767096 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9fe5dcd_8e91_4385_a1a0_bea6de56ee48.slice/crio-2754355c7464fbd0a026185bc0f44f9edf0d747d1bbdca77c3596b1b94c92dcc WatchSource:0}: Error finding container 2754355c7464fbd0a026185bc0f44f9edf0d747d1bbdca77c3596b1b94c92dcc: Status 404 returned error can't find the container with id 2754355c7464fbd0a026185bc0f44f9edf0d747d1bbdca77c3596b1b94c92dcc Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.779460 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxcb6\" (UniqueName: \"kubernetes.io/projected/806a0c64-26e2-4021-875a-b7224b615057-kube-api-access-fxcb6\") pod \"806a0c64-26e2-4021-875a-b7224b615057\" (UID: \"806a0c64-26e2-4021-875a-b7224b615057\") " Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.779591 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/806a0c64-26e2-4021-875a-b7224b615057-operator-scripts\") pod 
\"806a0c64-26e2-4021-875a-b7224b615057\" (UID: \"806a0c64-26e2-4021-875a-b7224b615057\") " Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.781067 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/806a0c64-26e2-4021-875a-b7224b615057-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "806a0c64-26e2-4021-875a-b7224b615057" (UID: "806a0c64-26e2-4021-875a-b7224b615057"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.783339 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/806a0c64-26e2-4021-875a-b7224b615057-kube-api-access-fxcb6" (OuterVolumeSpecName: "kube-api-access-fxcb6") pod "806a0c64-26e2-4021-875a-b7224b615057" (UID: "806a0c64-26e2-4021-875a-b7224b615057"). InnerVolumeSpecName "kube-api-access-fxcb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.883860 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxcb6\" (UniqueName: \"kubernetes.io/projected/806a0c64-26e2-4021-875a-b7224b615057-kube-api-access-fxcb6\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.884174 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/806a0c64-26e2-4021-875a-b7224b615057-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.959281 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-3566-account-create-update-8tq2m" Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.959284 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-3566-account-create-update-8tq2m" event={"ID":"b39da8d5-1def-498c-9a64-d015fa5de3b3","Type":"ContainerDied","Data":"2f67279c4e5ac515643d2eb23e752e29c9c6cb92b5f9017c298453092ae6ddbf"} Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.959447 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f67279c4e5ac515643d2eb23e752e29c9c6cb92b5f9017c298453092ae6ddbf" Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.961443 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-j5dwq" Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.961442 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-j5dwq" event={"ID":"806a0c64-26e2-4021-875a-b7224b615057","Type":"ContainerDied","Data":"e2f13f985a850f6899b1339002c16eb010d5c58d143c5944c24edffc4d10c689"} Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.961488 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2f13f985a850f6899b1339002c16eb010d5c58d143c5944c24edffc4d10c689" Feb 16 17:20:57 crc kubenswrapper[4794]: I0216 17:20:57.963260 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-frfcd-config-56k8q" event={"ID":"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48","Type":"ContainerStarted","Data":"2754355c7464fbd0a026185bc0f44f9edf0d747d1bbdca77c3596b1b94c92dcc"} Feb 16 17:20:58 crc kubenswrapper[4794]: I0216 17:20:58.770510 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9bqgp"] Feb 16 17:20:58 crc kubenswrapper[4794]: E0216 17:20:58.771253 4794 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="b39da8d5-1def-498c-9a64-d015fa5de3b3" containerName="mariadb-account-create-update" Feb 16 17:20:58 crc kubenswrapper[4794]: I0216 17:20:58.771270 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="b39da8d5-1def-498c-9a64-d015fa5de3b3" containerName="mariadb-account-create-update" Feb 16 17:20:58 crc kubenswrapper[4794]: E0216 17:20:58.771293 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="806a0c64-26e2-4021-875a-b7224b615057" containerName="mariadb-database-create" Feb 16 17:20:58 crc kubenswrapper[4794]: I0216 17:20:58.771318 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="806a0c64-26e2-4021-875a-b7224b615057" containerName="mariadb-database-create" Feb 16 17:20:58 crc kubenswrapper[4794]: I0216 17:20:58.771597 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="b39da8d5-1def-498c-9a64-d015fa5de3b3" containerName="mariadb-account-create-update" Feb 16 17:20:58 crc kubenswrapper[4794]: I0216 17:20:58.771615 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="806a0c64-26e2-4021-875a-b7224b615057" containerName="mariadb-database-create" Feb 16 17:20:58 crc kubenswrapper[4794]: I0216 17:20:58.772605 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9bqgp" Feb 16 17:20:58 crc kubenswrapper[4794]: I0216 17:20:58.779566 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 16 17:20:58 crc kubenswrapper[4794]: I0216 17:20:58.787613 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9bqgp"] Feb 16 17:20:58 crc kubenswrapper[4794]: I0216 17:20:58.915285 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb-operator-scripts\") pod \"root-account-create-update-9bqgp\" (UID: \"6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb\") " pod="openstack/root-account-create-update-9bqgp" Feb 16 17:20:58 crc kubenswrapper[4794]: I0216 17:20:58.915425 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbn5j\" (UniqueName: \"kubernetes.io/projected/6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb-kube-api-access-zbn5j\") pod \"root-account-create-update-9bqgp\" (UID: \"6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb\") " pod="openstack/root-account-create-update-9bqgp" Feb 16 17:20:58 crc kubenswrapper[4794]: I0216 17:20:58.991291 4794 generic.go:334] "Generic (PLEG): container finished" podID="d9fe5dcd-8e91-4385-a1a0-bea6de56ee48" containerID="90ce9cab9f6d005ccfe078c26004325c9eaba9f760b189549e61db3ce47e0448" exitCode=0 Feb 16 17:20:58 crc kubenswrapper[4794]: I0216 17:20:58.991581 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-frfcd-config-56k8q" event={"ID":"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48","Type":"ContainerDied","Data":"90ce9cab9f6d005ccfe078c26004325c9eaba9f760b189549e61db3ce47e0448"} Feb 16 17:20:59 crc kubenswrapper[4794]: I0216 17:20:59.001965 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:59 crc kubenswrapper[4794]: I0216 17:20:59.004038 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:59 crc kubenswrapper[4794]: I0216 17:20:59.010421 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:59 crc kubenswrapper[4794]: I0216 17:20:59.018438 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb-operator-scripts\") pod \"root-account-create-update-9bqgp\" (UID: \"6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb\") " pod="openstack/root-account-create-update-9bqgp" Feb 16 17:20:59 crc kubenswrapper[4794]: I0216 17:20:59.018489 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbn5j\" (UniqueName: \"kubernetes.io/projected/6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb-kube-api-access-zbn5j\") pod \"root-account-create-update-9bqgp\" (UID: \"6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb\") " pod="openstack/root-account-create-update-9bqgp" Feb 16 17:20:59 crc kubenswrapper[4794]: I0216 17:20:59.019709 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb-operator-scripts\") pod \"root-account-create-update-9bqgp\" (UID: \"6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb\") " pod="openstack/root-account-create-update-9bqgp" Feb 16 17:20:59 crc kubenswrapper[4794]: I0216 17:20:59.080098 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbn5j\" (UniqueName: \"kubernetes.io/projected/6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb-kube-api-access-zbn5j\") pod \"root-account-create-update-9bqgp\" (UID: \"6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb\") " 
pod="openstack/root-account-create-update-9bqgp" Feb 16 17:20:59 crc kubenswrapper[4794]: I0216 17:20:59.093368 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9bqgp" Feb 16 17:20:59 crc kubenswrapper[4794]: I0216 17:20:59.758796 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9bqgp"] Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.005103 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9bqgp" event={"ID":"6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb","Type":"ContainerStarted","Data":"edef52711be596bfd6444d76e113561a71b58162f94ae292e6d955183e4cbea0"} Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.008778 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.562319 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-frfcd-config-56k8q"
Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.657032 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-var-run\") pod \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") "
Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.658646 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-var-log-ovn\") pod \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") "
Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.658727 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gk9f\" (UniqueName: \"kubernetes.io/projected/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-kube-api-access-2gk9f\") pod \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") "
Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.658764 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-var-run-ovn\") pod \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") "
Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.658795 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-additional-scripts\") pod \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") "
Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.658931 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-scripts\") pod \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\" (UID: \"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48\") "
Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.657126 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-var-run" (OuterVolumeSpecName: "var-run") pod "d9fe5dcd-8e91-4385-a1a0-bea6de56ee48" (UID: "d9fe5dcd-8e91-4385-a1a0-bea6de56ee48"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.659260 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d9fe5dcd-8e91-4385-a1a0-bea6de56ee48" (UID: "d9fe5dcd-8e91-4385-a1a0-bea6de56ee48"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.659317 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d9fe5dcd-8e91-4385-a1a0-bea6de56ee48" (UID: "d9fe5dcd-8e91-4385-a1a0-bea6de56ee48"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.660035 4794 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.660052 4794 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.660067 4794 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-var-run\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.660066 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "d9fe5dcd-8e91-4385-a1a0-bea6de56ee48" (UID: "d9fe5dcd-8e91-4385-a1a0-bea6de56ee48"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.660622 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-scripts" (OuterVolumeSpecName: "scripts") pod "d9fe5dcd-8e91-4385-a1a0-bea6de56ee48" (UID: "d9fe5dcd-8e91-4385-a1a0-bea6de56ee48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.666334 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-kube-api-access-2gk9f" (OuterVolumeSpecName: "kube-api-access-2gk9f") pod "d9fe5dcd-8e91-4385-a1a0-bea6de56ee48" (UID: "d9fe5dcd-8e91-4385-a1a0-bea6de56ee48"). InnerVolumeSpecName "kube-api-access-2gk9f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.761631 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gk9f\" (UniqueName: \"kubernetes.io/projected/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-kube-api-access-2gk9f\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.761670 4794 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.761682 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:00 crc kubenswrapper[4794]: I0216 17:21:00.954935 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-frfcd"
Feb 16 17:21:01 crc kubenswrapper[4794]: I0216 17:21:01.014472 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-frfcd-config-56k8q"
Feb 16 17:21:01 crc kubenswrapper[4794]: I0216 17:21:01.014503 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-frfcd-config-56k8q" event={"ID":"d9fe5dcd-8e91-4385-a1a0-bea6de56ee48","Type":"ContainerDied","Data":"2754355c7464fbd0a026185bc0f44f9edf0d747d1bbdca77c3596b1b94c92dcc"}
Feb 16 17:21:01 crc kubenswrapper[4794]: I0216 17:21:01.014583 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2754355c7464fbd0a026185bc0f44f9edf0d747d1bbdca77c3596b1b94c92dcc"
Feb 16 17:21:01 crc kubenswrapper[4794]: I0216 17:21:01.016323 4794 generic.go:334] "Generic (PLEG): container finished" podID="6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb" containerID="e8830bf7dd6f89c0101e2fbd6ed08deab0a66b1aad535c5baccb6b9493aea4ea" exitCode=0
Feb 16 17:21:01 crc kubenswrapper[4794]: I0216 17:21:01.016398 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9bqgp" event={"ID":"6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb","Type":"ContainerDied","Data":"e8830bf7dd6f89c0101e2fbd6ed08deab0a66b1aad535c5baccb6b9493aea4ea"}
Feb 16 17:21:01 crc kubenswrapper[4794]: I0216 17:21:01.019018 4794 generic.go:334] "Generic (PLEG): container finished" podID="84dc223e-f01c-424c-802a-3e1a5ad819be" containerID="b25823c2a524cd064cea3525dd73d61d5012a1849f3dd84d5f3e499b993e3220" exitCode=0
Feb 16 17:21:01 crc kubenswrapper[4794]: I0216 17:21:01.019095 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w2gs8" event={"ID":"84dc223e-f01c-424c-802a-3e1a5ad819be","Type":"ContainerDied","Data":"b25823c2a524cd064cea3525dd73d61d5012a1849f3dd84d5f3e499b993e3220"}
Feb 16 17:21:01 crc kubenswrapper[4794]: I0216 17:21:01.675530 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-frfcd-config-56k8q"]
Feb 16 17:21:01 crc kubenswrapper[4794]: I0216 17:21:01.684318 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-frfcd-config-56k8q"]
Feb 16 17:21:01 crc kubenswrapper[4794]: I0216 17:21:01.905280 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-frfcd-config-9cr9k"]
Feb 16 17:21:01 crc kubenswrapper[4794]: E0216 17:21:01.905958 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9fe5dcd-8e91-4385-a1a0-bea6de56ee48" containerName="ovn-config"
Feb 16 17:21:01 crc kubenswrapper[4794]: I0216 17:21:01.905981 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9fe5dcd-8e91-4385-a1a0-bea6de56ee48" containerName="ovn-config"
Feb 16 17:21:01 crc kubenswrapper[4794]: I0216 17:21:01.906245 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9fe5dcd-8e91-4385-a1a0-bea6de56ee48" containerName="ovn-config"
Feb 16 17:21:01 crc kubenswrapper[4794]: I0216 17:21:01.907330 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-frfcd-config-9cr9k"
Feb 16 17:21:01 crc kubenswrapper[4794]: I0216 17:21:01.910636 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 16 17:21:01 crc kubenswrapper[4794]: I0216 17:21:01.918787 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-frfcd-config-9cr9k"]
Feb 16 17:21:01 crc kubenswrapper[4794]: I0216 17:21:01.987320 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/eede4ea8-dde9-4676-81b2-39a7b63c22cb-additional-scripts\") pod \"ovn-controller-frfcd-config-9cr9k\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " pod="openstack/ovn-controller-frfcd-config-9cr9k"
Feb 16 17:21:01 crc kubenswrapper[4794]: I0216 17:21:01.987723 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eede4ea8-dde9-4676-81b2-39a7b63c22cb-var-run-ovn\") pod \"ovn-controller-frfcd-config-9cr9k\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " pod="openstack/ovn-controller-frfcd-config-9cr9k"
Feb 16 17:21:01 crc kubenswrapper[4794]: I0216 17:21:01.987810 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eede4ea8-dde9-4676-81b2-39a7b63c22cb-var-log-ovn\") pod \"ovn-controller-frfcd-config-9cr9k\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " pod="openstack/ovn-controller-frfcd-config-9cr9k"
Feb 16 17:21:01 crc kubenswrapper[4794]: I0216 17:21:01.987945 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eede4ea8-dde9-4676-81b2-39a7b63c22cb-var-run\") pod \"ovn-controller-frfcd-config-9cr9k\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " pod="openstack/ovn-controller-frfcd-config-9cr9k"
Feb 16 17:21:01 crc kubenswrapper[4794]: I0216 17:21:01.988129 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eede4ea8-dde9-4676-81b2-39a7b63c22cb-scripts\") pod \"ovn-controller-frfcd-config-9cr9k\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " pod="openstack/ovn-controller-frfcd-config-9cr9k"
Feb 16 17:21:01 crc kubenswrapper[4794]: I0216 17:21:01.988202 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6fm9\" (UniqueName: \"kubernetes.io/projected/eede4ea8-dde9-4676-81b2-39a7b63c22cb-kube-api-access-t6fm9\") pod \"ovn-controller-frfcd-config-9cr9k\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " pod="openstack/ovn-controller-frfcd-config-9cr9k"
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.090664 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/eede4ea8-dde9-4676-81b2-39a7b63c22cb-additional-scripts\") pod \"ovn-controller-frfcd-config-9cr9k\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " pod="openstack/ovn-controller-frfcd-config-9cr9k"
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.090762 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eede4ea8-dde9-4676-81b2-39a7b63c22cb-var-run-ovn\") pod \"ovn-controller-frfcd-config-9cr9k\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " pod="openstack/ovn-controller-frfcd-config-9cr9k"
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.090792 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eede4ea8-dde9-4676-81b2-39a7b63c22cb-var-log-ovn\") pod \"ovn-controller-frfcd-config-9cr9k\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " pod="openstack/ovn-controller-frfcd-config-9cr9k"
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.090878 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eede4ea8-dde9-4676-81b2-39a7b63c22cb-var-run\") pod \"ovn-controller-frfcd-config-9cr9k\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " pod="openstack/ovn-controller-frfcd-config-9cr9k"
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.090976 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eede4ea8-dde9-4676-81b2-39a7b63c22cb-scripts\") pod \"ovn-controller-frfcd-config-9cr9k\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " pod="openstack/ovn-controller-frfcd-config-9cr9k"
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.091002 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6fm9\" (UniqueName: \"kubernetes.io/projected/eede4ea8-dde9-4676-81b2-39a7b63c22cb-kube-api-access-t6fm9\") pod \"ovn-controller-frfcd-config-9cr9k\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " pod="openstack/ovn-controller-frfcd-config-9cr9k"
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.091281 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eede4ea8-dde9-4676-81b2-39a7b63c22cb-var-run-ovn\") pod \"ovn-controller-frfcd-config-9cr9k\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " pod="openstack/ovn-controller-frfcd-config-9cr9k"
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.091328 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eede4ea8-dde9-4676-81b2-39a7b63c22cb-var-log-ovn\") pod \"ovn-controller-frfcd-config-9cr9k\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " pod="openstack/ovn-controller-frfcd-config-9cr9k"
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.091367 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eede4ea8-dde9-4676-81b2-39a7b63c22cb-var-run\") pod \"ovn-controller-frfcd-config-9cr9k\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " pod="openstack/ovn-controller-frfcd-config-9cr9k"
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.093494 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eede4ea8-dde9-4676-81b2-39a7b63c22cb-scripts\") pod \"ovn-controller-frfcd-config-9cr9k\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " pod="openstack/ovn-controller-frfcd-config-9cr9k"
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.094222 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/eede4ea8-dde9-4676-81b2-39a7b63c22cb-additional-scripts\") pod \"ovn-controller-frfcd-config-9cr9k\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " pod="openstack/ovn-controller-frfcd-config-9cr9k"
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.119811 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6fm9\" (UniqueName: \"kubernetes.io/projected/eede4ea8-dde9-4676-81b2-39a7b63c22cb-kube-api-access-t6fm9\") pod \"ovn-controller-frfcd-config-9cr9k\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " pod="openstack/ovn-controller-frfcd-config-9cr9k"
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.244004 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-frfcd-config-9cr9k"
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.464262 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="026253d8-eaea-4c12-91e0-455331cdaa5e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused"
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.519162 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="8fb6be66-7fef-4554-897b-30d9f4637138" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused"
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.521411 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused"
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.530765 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="47572286-fbbf-4189-9c6f-feb54624ee2a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused"
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.660103 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9bqgp"
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.756596 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.811846 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9fe5dcd-8e91-4385-a1a0-bea6de56ee48" path="/var/lib/kubelet/pods/d9fe5dcd-8e91-4385-a1a0-bea6de56ee48/volumes"
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.824326 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbn5j\" (UniqueName: \"kubernetes.io/projected/6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb-kube-api-access-zbn5j\") pod \"6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb\" (UID: \"6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb\") "
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.824647 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb-operator-scripts\") pod \"6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb\" (UID: \"6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb\") "
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.829460 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb" (UID: "6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.834314 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb-kube-api-access-zbn5j" (OuterVolumeSpecName: "kube-api-access-zbn5j") pod "6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb" (UID: "6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb"). InnerVolumeSpecName "kube-api-access-zbn5j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.936267 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/84dc223e-f01c-424c-802a-3e1a5ad819be-etc-swift\") pod \"84dc223e-f01c-424c-802a-3e1a5ad819be\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") "
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.936537 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84dc223e-f01c-424c-802a-3e1a5ad819be-combined-ca-bundle\") pod \"84dc223e-f01c-424c-802a-3e1a5ad819be\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") "
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.936593 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/84dc223e-f01c-424c-802a-3e1a5ad819be-ring-data-devices\") pod \"84dc223e-f01c-424c-802a-3e1a5ad819be\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") "
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.936640 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/84dc223e-f01c-424c-802a-3e1a5ad819be-scripts\") pod \"84dc223e-f01c-424c-802a-3e1a5ad819be\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") "
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.936680 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw2v6\" (UniqueName: \"kubernetes.io/projected/84dc223e-f01c-424c-802a-3e1a5ad819be-kube-api-access-lw2v6\") pod \"84dc223e-f01c-424c-802a-3e1a5ad819be\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") "
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.936748 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/84dc223e-f01c-424c-802a-3e1a5ad819be-dispersionconf\") pod \"84dc223e-f01c-424c-802a-3e1a5ad819be\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") "
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.936802 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/84dc223e-f01c-424c-802a-3e1a5ad819be-swiftconf\") pod \"84dc223e-f01c-424c-802a-3e1a5ad819be\" (UID: \"84dc223e-f01c-424c-802a-3e1a5ad819be\") "
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.937514 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84dc223e-f01c-424c-802a-3e1a5ad819be-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "84dc223e-f01c-424c-802a-3e1a5ad819be" (UID: "84dc223e-f01c-424c-802a-3e1a5ad819be"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.937566 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbn5j\" (UniqueName: \"kubernetes.io/projected/6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb-kube-api-access-zbn5j\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.937745 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.941612 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.944474 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84dc223e-f01c-424c-802a-3e1a5ad819be-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "84dc223e-f01c-424c-802a-3e1a5ad819be" (UID: "84dc223e-f01c-424c-802a-3e1a5ad819be"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.948598 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84dc223e-f01c-424c-802a-3e1a5ad819be-kube-api-access-lw2v6" (OuterVolumeSpecName: "kube-api-access-lw2v6") pod "84dc223e-f01c-424c-802a-3e1a5ad819be" (UID: "84dc223e-f01c-424c-802a-3e1a5ad819be"). InnerVolumeSpecName "kube-api-access-lw2v6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.954173 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84dc223e-f01c-424c-802a-3e1a5ad819be-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "84dc223e-f01c-424c-802a-3e1a5ad819be" (UID: "84dc223e-f01c-424c-802a-3e1a5ad819be"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.988409 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84dc223e-f01c-424c-802a-3e1a5ad819be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84dc223e-f01c-424c-802a-3e1a5ad819be" (UID: "84dc223e-f01c-424c-802a-3e1a5ad819be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.989996 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84dc223e-f01c-424c-802a-3e1a5ad819be-scripts" (OuterVolumeSpecName: "scripts") pod "84dc223e-f01c-424c-802a-3e1a5ad819be" (UID: "84dc223e-f01c-424c-802a-3e1a5ad819be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.991602 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84dc223e-f01c-424c-802a-3e1a5ad819be-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "84dc223e-f01c-424c-802a-3e1a5ad819be" (UID: "84dc223e-f01c-424c-802a-3e1a5ad819be"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:21:02 crc kubenswrapper[4794]: I0216 17:21:02.995896 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-frfcd-config-9cr9k"]
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.043751 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84dc223e-f01c-424c-802a-3e1a5ad819be-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.043792 4794 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/84dc223e-f01c-424c-802a-3e1a5ad819be-ring-data-devices\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.043803 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/84dc223e-f01c-424c-802a-3e1a5ad819be-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.043815 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw2v6\" (UniqueName: \"kubernetes.io/projected/84dc223e-f01c-424c-802a-3e1a5ad819be-kube-api-access-lw2v6\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.043829 4794 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/84dc223e-f01c-424c-802a-3e1a5ad819be-dispersionconf\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.043839 4794 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/84dc223e-f01c-424c-802a-3e1a5ad819be-swiftconf\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.043861 4794 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/84dc223e-f01c-424c-802a-3e1a5ad819be-etc-swift\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.069220 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9bqgp" event={"ID":"6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb","Type":"ContainerDied","Data":"edef52711be596bfd6444d76e113561a71b58162f94ae292e6d955183e4cbea0"}
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.069634 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edef52711be596bfd6444d76e113561a71b58162f94ae292e6d955183e4cbea0"
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.069510 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9bqgp"
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.072947 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w2gs8" event={"ID":"84dc223e-f01c-424c-802a-3e1a5ad819be","Type":"ContainerDied","Data":"025087f3ff8e2c57bcf1bcd9580bbdad5bcb800ac9a5df526acc759fcd226ab0"}
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.072989 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="025087f3ff8e2c57bcf1bcd9580bbdad5bcb800ac9a5df526acc759fcd226ab0"
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.073069 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-w2gs8"
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.077910 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9d6f0b7b-1214-4425-a850-09933e0e9a6e" containerName="prometheus" containerID="cri-o://379efa73cff06de727c8054915ead633b64ca17382d422e7cbdf46cece02fb7e" gracePeriod=600
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.078103 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-frfcd-config-9cr9k" event={"ID":"eede4ea8-dde9-4676-81b2-39a7b63c22cb","Type":"ContainerStarted","Data":"7646dbd68811f2ddbf7ed0dbba681f3a3ad412f6a98c55446c9c16803442fc25"}
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.078514 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9d6f0b7b-1214-4425-a850-09933e0e9a6e" containerName="thanos-sidecar" containerID="cri-o://265c2301b55f664d7617bffeb0465ac9aebe7fe9748a52cef3c76c4e8113c166" gracePeriod=600
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.078639 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9d6f0b7b-1214-4425-a850-09933e0e9a6e" containerName="config-reloader" containerID="cri-o://b3d56ccbfff73e0edcfb351fa25da3358d8bb011f8536b7a1e3d48a7ef197ab0" gracePeriod=600
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.696934 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 16 17:21:03 crc kubenswrapper[4794]: E0216 17:21:03.697496 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84dc223e-f01c-424c-802a-3e1a5ad819be" containerName="swift-ring-rebalance"
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.697511 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="84dc223e-f01c-424c-802a-3e1a5ad819be" containerName="swift-ring-rebalance"
Feb 16 17:21:03 crc kubenswrapper[4794]: E0216 17:21:03.697546 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb" containerName="mariadb-account-create-update"
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.697552 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb" containerName="mariadb-account-create-update"
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.697770 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="84dc223e-f01c-424c-802a-3e1a5ad819be" containerName="swift-ring-rebalance"
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.697779 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb" containerName="mariadb-account-create-update"
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.698861 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.705495 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data"
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.743880 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.878108 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e26909b-581a-4945-adf3-58a96cdf5b85-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"0e26909b-581a-4945-adf3-58a96cdf5b85\") " pod="openstack/mysqld-exporter-0"
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.878236 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e26909b-581a-4945-adf3-58a96cdf5b85-config-data\") pod \"mysqld-exporter-0\" (UID: \"0e26909b-581a-4945-adf3-58a96cdf5b85\") " pod="openstack/mysqld-exporter-0"
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.878323 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jspnz\" (UniqueName: \"kubernetes.io/projected/0e26909b-581a-4945-adf3-58a96cdf5b85-kube-api-access-jspnz\") pod \"mysqld-exporter-0\" (UID: \"0e26909b-581a-4945-adf3-58a96cdf5b85\") " pod="openstack/mysqld-exporter-0"
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.980655 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jspnz\" (UniqueName: \"kubernetes.io/projected/0e26909b-581a-4945-adf3-58a96cdf5b85-kube-api-access-jspnz\") pod \"mysqld-exporter-0\" (UID: \"0e26909b-581a-4945-adf3-58a96cdf5b85\") " pod="openstack/mysqld-exporter-0"
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.980806 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e26909b-581a-4945-adf3-58a96cdf5b85-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"0e26909b-581a-4945-adf3-58a96cdf5b85\") " pod="openstack/mysqld-exporter-0"
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.980905 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e26909b-581a-4945-adf3-58a96cdf5b85-config-data\") pod \"mysqld-exporter-0\" (UID: \"0e26909b-581a-4945-adf3-58a96cdf5b85\") " pod="openstack/mysqld-exporter-0"
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.989635 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e26909b-581a-4945-adf3-58a96cdf5b85-config-data\") pod \"mysqld-exporter-0\" (UID: \"0e26909b-581a-4945-adf3-58a96cdf5b85\") " pod="openstack/mysqld-exporter-0"
Feb 16 17:21:03 crc kubenswrapper[4794]: I0216 17:21:03.993494 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e26909b-581a-4945-adf3-58a96cdf5b85-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"0e26909b-581a-4945-adf3-58a96cdf5b85\") " pod="openstack/mysqld-exporter-0"
Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.024278 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jspnz\" (UniqueName: \"kubernetes.io/projected/0e26909b-581a-4945-adf3-58a96cdf5b85-kube-api-access-jspnz\") pod \"mysqld-exporter-0\" (UID: \"0e26909b-581a-4945-adf3-58a96cdf5b85\") " pod="openstack/mysqld-exporter-0"
Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.091400 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-frfcd-config-9cr9k" event={"ID":"eede4ea8-dde9-4676-81b2-39a7b63c22cb","Type":"ContainerStarted","Data":"24980b9a1476a21f65f65abd38013b3b1da41240e84dcd20727b56adf4c610f9"}
Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.091476 4794 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.118794 4794 generic.go:334] "Generic (PLEG): container finished" podID="9d6f0b7b-1214-4425-a850-09933e0e9a6e" containerID="265c2301b55f664d7617bffeb0465ac9aebe7fe9748a52cef3c76c4e8113c166" exitCode=0 Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.118831 4794 generic.go:334] "Generic (PLEG): container finished" podID="9d6f0b7b-1214-4425-a850-09933e0e9a6e" containerID="b3d56ccbfff73e0edcfb351fa25da3358d8bb011f8536b7a1e3d48a7ef197ab0" exitCode=0 Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.118841 4794 generic.go:334] "Generic (PLEG): container finished" podID="9d6f0b7b-1214-4425-a850-09933e0e9a6e" containerID="379efa73cff06de727c8054915ead633b64ca17382d422e7cbdf46cece02fb7e" exitCode=0 Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.118857 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d6f0b7b-1214-4425-a850-09933e0e9a6e","Type":"ContainerDied","Data":"265c2301b55f664d7617bffeb0465ac9aebe7fe9748a52cef3c76c4e8113c166"} Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.118910 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d6f0b7b-1214-4425-a850-09933e0e9a6e","Type":"ContainerDied","Data":"b3d56ccbfff73e0edcfb351fa25da3358d8bb011f8536b7a1e3d48a7ef197ab0"} Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.118938 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d6f0b7b-1214-4425-a850-09933e0e9a6e","Type":"ContainerDied","Data":"379efa73cff06de727c8054915ead633b64ca17382d422e7cbdf46cece02fb7e"} Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.131898 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-frfcd-config-9cr9k" podStartSLOduration=3.131877181 
podStartE2EDuration="3.131877181s" podCreationTimestamp="2026-02-16 17:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:04.127198512 +0000 UTC m=+1290.075293159" watchObservedRunningTime="2026-02-16 17:21:04.131877181 +0000 UTC m=+1290.079971828" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.366928 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.497821 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d6f0b7b-1214-4425-a850-09933e0e9a6e-web-config\") pod \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.498446 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d6f0b7b-1214-4425-a850-09933e0e9a6e-tls-assets\") pod \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.498720 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-847c3bfd-2842-4b87-9058-2b4210d0df84\") pod \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.498815 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9d6f0b7b-1214-4425-a850-09933e0e9a6e-prometheus-metric-storage-rulefiles-2\") pod \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\" (UID: 
\"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.498969 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d6f0b7b-1214-4425-a850-09933e0e9a6e-config-out\") pod \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.499030 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d6f0b7b-1214-4425-a850-09933e0e9a6e-thanos-prometheus-http-client-file\") pod \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.499103 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d6f0b7b-1214-4425-a850-09933e0e9a6e-config\") pod \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.499140 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4lnq\" (UniqueName: \"kubernetes.io/projected/9d6f0b7b-1214-4425-a850-09933e0e9a6e-kube-api-access-x4lnq\") pod \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.499194 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9d6f0b7b-1214-4425-a850-09933e0e9a6e-prometheus-metric-storage-rulefiles-1\") pod \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.499234 4794 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d6f0b7b-1214-4425-a850-09933e0e9a6e-prometheus-metric-storage-rulefiles-0\") pod \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\" (UID: \"9d6f0b7b-1214-4425-a850-09933e0e9a6e\") " Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.503364 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d6f0b7b-1214-4425-a850-09933e0e9a6e-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "9d6f0b7b-1214-4425-a850-09933e0e9a6e" (UID: "9d6f0b7b-1214-4425-a850-09933e0e9a6e"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.506465 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d6f0b7b-1214-4425-a850-09933e0e9a6e-config-out" (OuterVolumeSpecName: "config-out") pod "9d6f0b7b-1214-4425-a850-09933e0e9a6e" (UID: "9d6f0b7b-1214-4425-a850-09933e0e9a6e"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.508056 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d6f0b7b-1214-4425-a850-09933e0e9a6e-kube-api-access-x4lnq" (OuterVolumeSpecName: "kube-api-access-x4lnq") pod "9d6f0b7b-1214-4425-a850-09933e0e9a6e" (UID: "9d6f0b7b-1214-4425-a850-09933e0e9a6e"). InnerVolumeSpecName "kube-api-access-x4lnq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.510764 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d6f0b7b-1214-4425-a850-09933e0e9a6e-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "9d6f0b7b-1214-4425-a850-09933e0e9a6e" (UID: "9d6f0b7b-1214-4425-a850-09933e0e9a6e"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.511135 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d6f0b7b-1214-4425-a850-09933e0e9a6e-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "9d6f0b7b-1214-4425-a850-09933e0e9a6e" (UID: "9d6f0b7b-1214-4425-a850-09933e0e9a6e"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.511317 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d6f0b7b-1214-4425-a850-09933e0e9a6e-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "9d6f0b7b-1214-4425-a850-09933e0e9a6e" (UID: "9d6f0b7b-1214-4425-a850-09933e0e9a6e"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.525576 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d6f0b7b-1214-4425-a850-09933e0e9a6e-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "9d6f0b7b-1214-4425-a850-09933e0e9a6e" (UID: "9d6f0b7b-1214-4425-a850-09933e0e9a6e"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.530224 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d6f0b7b-1214-4425-a850-09933e0e9a6e-config" (OuterVolumeSpecName: "config") pod "9d6f0b7b-1214-4425-a850-09933e0e9a6e" (UID: "9d6f0b7b-1214-4425-a850-09933e0e9a6e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.540203 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-847c3bfd-2842-4b87-9058-2b4210d0df84" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "9d6f0b7b-1214-4425-a850-09933e0e9a6e" (UID: "9d6f0b7b-1214-4425-a850-09933e0e9a6e"). InnerVolumeSpecName "pvc-847c3bfd-2842-4b87-9058-2b4210d0df84". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.558578 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d6f0b7b-1214-4425-a850-09933e0e9a6e-web-config" (OuterVolumeSpecName: "web-config") pod "9d6f0b7b-1214-4425-a850-09933e0e9a6e" (UID: "9d6f0b7b-1214-4425-a850-09933e0e9a6e"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.602483 4794 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d6f0b7b-1214-4425-a850-09933e0e9a6e-config-out\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.602537 4794 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d6f0b7b-1214-4425-a850-09933e0e9a6e-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.602551 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d6f0b7b-1214-4425-a850-09933e0e9a6e-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.602566 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4lnq\" (UniqueName: \"kubernetes.io/projected/9d6f0b7b-1214-4425-a850-09933e0e9a6e-kube-api-access-x4lnq\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.602576 4794 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9d6f0b7b-1214-4425-a850-09933e0e9a6e-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.602586 4794 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d6f0b7b-1214-4425-a850-09933e0e9a6e-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.602598 4794 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/9d6f0b7b-1214-4425-a850-09933e0e9a6e-web-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.602608 4794 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d6f0b7b-1214-4425-a850-09933e0e9a6e-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.602650 4794 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-847c3bfd-2842-4b87-9058-2b4210d0df84\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-847c3bfd-2842-4b87-9058-2b4210d0df84\") on node \"crc\" " Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.602661 4794 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9d6f0b7b-1214-4425-a850-09933e0e9a6e-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.777902 4794 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.778103 4794 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-847c3bfd-2842-4b87-9058-2b4210d0df84" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-847c3bfd-2842-4b87-9058-2b4210d0df84") on node "crc" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.806191 4794 reconciler_common.go:293] "Volume detached for volume \"pvc-847c3bfd-2842-4b87-9058-2b4210d0df84\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-847c3bfd-2842-4b87-9058-2b4210d0df84\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.841426 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 17:21:04 crc kubenswrapper[4794]: W0216 17:21:04.843293 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e26909b_581a_4945_adf3_58a96cdf5b85.slice/crio-92d29819be5843285c04b2fed813cfce457e88f8f791888294353a438790c96b WatchSource:0}: Error finding container 92d29819be5843285c04b2fed813cfce457e88f8f791888294353a438790c96b: Status 404 returned error can't find the container with id 92d29819be5843285c04b2fed813cfce457e88f8f791888294353a438790c96b Feb 16 17:21:04 crc kubenswrapper[4794]: I0216 17:21:04.864754 4794 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.135634 4794 generic.go:334] "Generic (PLEG): container finished" podID="eede4ea8-dde9-4676-81b2-39a7b63c22cb" containerID="24980b9a1476a21f65f65abd38013b3b1da41240e84dcd20727b56adf4c610f9" exitCode=0 Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.135984 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-frfcd-config-9cr9k" 
event={"ID":"eede4ea8-dde9-4676-81b2-39a7b63c22cb","Type":"ContainerDied","Data":"24980b9a1476a21f65f65abd38013b3b1da41240e84dcd20727b56adf4c610f9"} Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.145841 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d6f0b7b-1214-4425-a850-09933e0e9a6e","Type":"ContainerDied","Data":"a1e81dae2c43526579221a535c210d4d750db4ff55ce4a55759e79c6ccdb7e7f"} Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.145888 4794 scope.go:117] "RemoveContainer" containerID="265c2301b55f664d7617bffeb0465ac9aebe7fe9748a52cef3c76c4e8113c166" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.146001 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.156719 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"0e26909b-581a-4945-adf3-58a96cdf5b85","Type":"ContainerStarted","Data":"92d29819be5843285c04b2fed813cfce457e88f8f791888294353a438790c96b"} Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.210443 4794 scope.go:117] "RemoveContainer" containerID="b3d56ccbfff73e0edcfb351fa25da3358d8bb011f8536b7a1e3d48a7ef197ab0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.240550 4794 scope.go:117] "RemoveContainer" containerID="379efa73cff06de727c8054915ead633b64ca17382d422e7cbdf46cece02fb7e" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.244528 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.261085 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.291035 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 17:21:05 crc 
kubenswrapper[4794]: E0216 17:21:05.291596 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d6f0b7b-1214-4425-a850-09933e0e9a6e" containerName="thanos-sidecar" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.291621 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d6f0b7b-1214-4425-a850-09933e0e9a6e" containerName="thanos-sidecar" Feb 16 17:21:05 crc kubenswrapper[4794]: E0216 17:21:05.291647 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d6f0b7b-1214-4425-a850-09933e0e9a6e" containerName="init-config-reloader" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.291657 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d6f0b7b-1214-4425-a850-09933e0e9a6e" containerName="init-config-reloader" Feb 16 17:21:05 crc kubenswrapper[4794]: E0216 17:21:05.291676 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d6f0b7b-1214-4425-a850-09933e0e9a6e" containerName="config-reloader" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.291683 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d6f0b7b-1214-4425-a850-09933e0e9a6e" containerName="config-reloader" Feb 16 17:21:05 crc kubenswrapper[4794]: E0216 17:21:05.291701 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d6f0b7b-1214-4425-a850-09933e0e9a6e" containerName="prometheus" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.291709 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d6f0b7b-1214-4425-a850-09933e0e9a6e" containerName="prometheus" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.291986 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d6f0b7b-1214-4425-a850-09933e0e9a6e" containerName="thanos-sidecar" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.292010 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d6f0b7b-1214-4425-a850-09933e0e9a6e" containerName="config-reloader" Feb 16 17:21:05 crc 
kubenswrapper[4794]: I0216 17:21:05.292031 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d6f0b7b-1214-4425-a850-09933e0e9a6e" containerName="prometheus" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.294519 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.294630 4794 scope.go:117] "RemoveContainer" containerID="c685149ca5c1cacd22f1f520b28739a9acd18c22814651677ff962615d1dc812" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.296692 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.296861 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8ndpc" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.297084 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.297215 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.299785 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.299948 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.300216 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.311932 4794 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.317473 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.337622 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.441113 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/11e05321-0f2f-4688-abd5-0e3a019bf530-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.441237 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e05321-0f2f-4688-abd5-0e3a019bf530-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.441270 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/11e05321-0f2f-4688-abd5-0e3a019bf530-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.441333 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/11e05321-0f2f-4688-abd5-0e3a019bf530-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.441366 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/11e05321-0f2f-4688-abd5-0e3a019bf530-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.441399 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/11e05321-0f2f-4688-abd5-0e3a019bf530-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.441452 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khwkv\" (UniqueName: \"kubernetes.io/projected/11e05321-0f2f-4688-abd5-0e3a019bf530-kube-api-access-khwkv\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.441479 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/11e05321-0f2f-4688-abd5-0e3a019bf530-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 
17:21:05.441510 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/11e05321-0f2f-4688-abd5-0e3a019bf530-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.441680 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/11e05321-0f2f-4688-abd5-0e3a019bf530-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.441718 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/11e05321-0f2f-4688-abd5-0e3a019bf530-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.441742 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-847c3bfd-2842-4b87-9058-2b4210d0df84\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-847c3bfd-2842-4b87-9058-2b4210d0df84\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.441951 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/11e05321-0f2f-4688-abd5-0e3a019bf530-config\") 
pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.545202 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khwkv\" (UniqueName: \"kubernetes.io/projected/11e05321-0f2f-4688-abd5-0e3a019bf530-kube-api-access-khwkv\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.545285 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/11e05321-0f2f-4688-abd5-0e3a019bf530-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.545354 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/11e05321-0f2f-4688-abd5-0e3a019bf530-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.545416 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/11e05321-0f2f-4688-abd5-0e3a019bf530-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.545474 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/11e05321-0f2f-4688-abd5-0e3a019bf530-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.545505 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-847c3bfd-2842-4b87-9058-2b4210d0df84\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-847c3bfd-2842-4b87-9058-2b4210d0df84\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.545564 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/11e05321-0f2f-4688-abd5-0e3a019bf530-config\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.545628 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/11e05321-0f2f-4688-abd5-0e3a019bf530-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.545706 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e05321-0f2f-4688-abd5-0e3a019bf530-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 
17:21:05.545738 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/11e05321-0f2f-4688-abd5-0e3a019bf530-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.545790 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/11e05321-0f2f-4688-abd5-0e3a019bf530-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.545826 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/11e05321-0f2f-4688-abd5-0e3a019bf530-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.545861 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/11e05321-0f2f-4688-abd5-0e3a019bf530-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.548142 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/11e05321-0f2f-4688-abd5-0e3a019bf530-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" 
Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.548953 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/11e05321-0f2f-4688-abd5-0e3a019bf530-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.549459 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/11e05321-0f2f-4688-abd5-0e3a019bf530-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.560095 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/11e05321-0f2f-4688-abd5-0e3a019bf530-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.565106 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/11e05321-0f2f-4688-abd5-0e3a019bf530-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.565176 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/11e05321-0f2f-4688-abd5-0e3a019bf530-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.567573 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e05321-0f2f-4688-abd5-0e3a019bf530-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.569757 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/11e05321-0f2f-4688-abd5-0e3a019bf530-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.570146 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/11e05321-0f2f-4688-abd5-0e3a019bf530-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.570227 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/11e05321-0f2f-4688-abd5-0e3a019bf530-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.577253 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.577474 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-847c3bfd-2842-4b87-9058-2b4210d0df84\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-847c3bfd-2842-4b87-9058-2b4210d0df84\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/60fc82cc31f45bb9356123d8b01b5dfd7c96515a1e2ee078d5b084dc843df6e3/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.578952 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/11e05321-0f2f-4688-abd5-0e3a019bf530-config\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.588735 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khwkv\" (UniqueName: \"kubernetes.io/projected/11e05321-0f2f-4688-abd5-0e3a019bf530-kube-api-access-khwkv\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.732344 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-847c3bfd-2842-4b87-9058-2b4210d0df84\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-847c3bfd-2842-4b87-9058-2b4210d0df84\") pod \"prometheus-metric-storage-0\" (UID: \"11e05321-0f2f-4688-abd5-0e3a019bf530\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:05 crc kubenswrapper[4794]: I0216 17:21:05.997185 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:06 crc kubenswrapper[4794]: I0216 17:21:06.806608 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d6f0b7b-1214-4425-a850-09933e0e9a6e" path="/var/lib/kubelet/pods/9d6f0b7b-1214-4425-a850-09933e0e9a6e/volumes" Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.002760 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="9d6f0b7b-1214-4425-a850-09933e0e9a6e" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.139:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.151501 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-frfcd-config-9cr9k" Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.211114 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-frfcd-config-9cr9k" Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.212006 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-frfcd-config-9cr9k" event={"ID":"eede4ea8-dde9-4676-81b2-39a7b63c22cb","Type":"ContainerDied","Data":"7646dbd68811f2ddbf7ed0dbba681f3a3ad412f6a98c55446c9c16803442fc25"} Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.212103 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7646dbd68811f2ddbf7ed0dbba681f3a3ad412f6a98c55446c9c16803442fc25" Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.298025 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eede4ea8-dde9-4676-81b2-39a7b63c22cb-scripts\") pod \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.298195 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6fm9\" (UniqueName: \"kubernetes.io/projected/eede4ea8-dde9-4676-81b2-39a7b63c22cb-kube-api-access-t6fm9\") pod \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.298251 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eede4ea8-dde9-4676-81b2-39a7b63c22cb-var-log-ovn\") pod \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.298363 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/eede4ea8-dde9-4676-81b2-39a7b63c22cb-additional-scripts\") pod \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\" (UID: 
\"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.298398 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eede4ea8-dde9-4676-81b2-39a7b63c22cb-var-run-ovn\") pod \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.298465 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eede4ea8-dde9-4676-81b2-39a7b63c22cb-var-run\") pod \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\" (UID: \"eede4ea8-dde9-4676-81b2-39a7b63c22cb\") " Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.298930 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eede4ea8-dde9-4676-81b2-39a7b63c22cb-var-run" (OuterVolumeSpecName: "var-run") pod "eede4ea8-dde9-4676-81b2-39a7b63c22cb" (UID: "eede4ea8-dde9-4676-81b2-39a7b63c22cb"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.299724 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eede4ea8-dde9-4676-81b2-39a7b63c22cb-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "eede4ea8-dde9-4676-81b2-39a7b63c22cb" (UID: "eede4ea8-dde9-4676-81b2-39a7b63c22cb"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.299760 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eede4ea8-dde9-4676-81b2-39a7b63c22cb-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "eede4ea8-dde9-4676-81b2-39a7b63c22cb" (UID: "eede4ea8-dde9-4676-81b2-39a7b63c22cb"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.299784 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eede4ea8-dde9-4676-81b2-39a7b63c22cb-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "eede4ea8-dde9-4676-81b2-39a7b63c22cb" (UID: "eede4ea8-dde9-4676-81b2-39a7b63c22cb"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.300155 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eede4ea8-dde9-4676-81b2-39a7b63c22cb-scripts" (OuterVolumeSpecName: "scripts") pod "eede4ea8-dde9-4676-81b2-39a7b63c22cb" (UID: "eede4ea8-dde9-4676-81b2-39a7b63c22cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.308686 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eede4ea8-dde9-4676-81b2-39a7b63c22cb-kube-api-access-t6fm9" (OuterVolumeSpecName: "kube-api-access-t6fm9") pod "eede4ea8-dde9-4676-81b2-39a7b63c22cb" (UID: "eede4ea8-dde9-4676-81b2-39a7b63c22cb"). InnerVolumeSpecName "kube-api-access-t6fm9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.401409 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eede4ea8-dde9-4676-81b2-39a7b63c22cb-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.401448 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6fm9\" (UniqueName: \"kubernetes.io/projected/eede4ea8-dde9-4676-81b2-39a7b63c22cb-kube-api-access-t6fm9\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.401461 4794 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eede4ea8-dde9-4676-81b2-39a7b63c22cb-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.401473 4794 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/eede4ea8-dde9-4676-81b2-39a7b63c22cb-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.401486 4794 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eede4ea8-dde9-4676-81b2-39a7b63c22cb-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.401496 4794 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eede4ea8-dde9-4676-81b2-39a7b63c22cb-var-run\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:07 crc kubenswrapper[4794]: I0216 17:21:07.656648 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 17:21:07 crc kubenswrapper[4794]: W0216 17:21:07.669184 4794 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11e05321_0f2f_4688_abd5_0e3a019bf530.slice/crio-be2004d3c3b51334b35475f78c6596579ea87215b1d75e2d64475a192e4c5e92 WatchSource:0}: Error finding container be2004d3c3b51334b35475f78c6596579ea87215b1d75e2d64475a192e4c5e92: Status 404 returned error can't find the container with id be2004d3c3b51334b35475f78c6596579ea87215b1d75e2d64475a192e4c5e92 Feb 16 17:21:08 crc kubenswrapper[4794]: I0216 17:21:08.225095 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"0e26909b-581a-4945-adf3-58a96cdf5b85","Type":"ContainerStarted","Data":"58c8cbb23abc1c53e580b5dc9b3856a5d8c80d3681c1dfe04ee4dfc70ad59332"} Feb 16 17:21:08 crc kubenswrapper[4794]: I0216 17:21:08.229097 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11e05321-0f2f-4688-abd5-0e3a019bf530","Type":"ContainerStarted","Data":"be2004d3c3b51334b35475f78c6596579ea87215b1d75e2d64475a192e4c5e92"} Feb 16 17:21:08 crc kubenswrapper[4794]: I0216 17:21:08.255898 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.731619976 podStartE2EDuration="5.255870277s" podCreationTimestamp="2026-02-16 17:21:03 +0000 UTC" firstStartedPulling="2026-02-16 17:21:04.864434971 +0000 UTC m=+1290.812529618" lastFinishedPulling="2026-02-16 17:21:07.388685262 +0000 UTC m=+1293.336779919" observedRunningTime="2026-02-16 17:21:08.245985637 +0000 UTC m=+1294.194080284" watchObservedRunningTime="2026-02-16 17:21:08.255870277 +0000 UTC m=+1294.203964934" Feb 16 17:21:08 crc kubenswrapper[4794]: I0216 17:21:08.285141 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-frfcd-config-9cr9k"] Feb 16 17:21:08 crc kubenswrapper[4794]: I0216 17:21:08.297888 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-frfcd-config-9cr9k"] Feb 16 17:21:08 
crc kubenswrapper[4794]: I0216 17:21:08.809216 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eede4ea8-dde9-4676-81b2-39a7b63c22cb" path="/var/lib/kubelet/pods/eede4ea8-dde9-4676-81b2-39a7b63c22cb/volumes" Feb 16 17:21:11 crc kubenswrapper[4794]: I0216 17:21:11.200159 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-etc-swift\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0" Feb 16 17:21:11 crc kubenswrapper[4794]: I0216 17:21:11.211095 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54acc9db-6bd7-463f-8637-6aa39ed3eb11-etc-swift\") pod \"swift-storage-0\" (UID: \"54acc9db-6bd7-463f-8637-6aa39ed3eb11\") " pod="openstack/swift-storage-0" Feb 16 17:21:11 crc kubenswrapper[4794]: I0216 17:21:11.263782 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11e05321-0f2f-4688-abd5-0e3a019bf530","Type":"ContainerStarted","Data":"e1ac4a6ed460c8bbe3393f1e735b7c4acac62ed4d9a8fbd03091532f9455ec11"} Feb 16 17:21:11 crc kubenswrapper[4794]: I0216 17:21:11.326642 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 16 17:21:12 crc kubenswrapper[4794]: I0216 17:21:12.463880 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="026253d8-eaea-4c12-91e0-455331cdaa5e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 16 17:21:12 crc kubenswrapper[4794]: I0216 17:21:12.514067 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="8fb6be66-7fef-4554-897b-30d9f4637138" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 16 17:21:12 crc kubenswrapper[4794]: I0216 17:21:12.519577 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Feb 16 17:21:12 crc kubenswrapper[4794]: I0216 17:21:12.530401 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:21:17 crc kubenswrapper[4794]: I0216 17:21:17.332016 4794 generic.go:334] "Generic (PLEG): container finished" podID="11e05321-0f2f-4688-abd5-0e3a019bf530" containerID="e1ac4a6ed460c8bbe3393f1e735b7c4acac62ed4d9a8fbd03091532f9455ec11" exitCode=0 Feb 16 17:21:17 crc kubenswrapper[4794]: I0216 17:21:17.333054 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11e05321-0f2f-4688-abd5-0e3a019bf530","Type":"ContainerDied","Data":"e1ac4a6ed460c8bbe3393f1e735b7c4acac62ed4d9a8fbd03091532f9455ec11"} Feb 16 17:21:17 crc kubenswrapper[4794]: I0216 17:21:17.355757 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 17:21:17 crc kubenswrapper[4794]: W0216 17:21:17.358701 4794 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54acc9db_6bd7_463f_8637_6aa39ed3eb11.slice/crio-7043ded1d28328e1a895c673d03491f88e1b0ef1a1ee31c3d7045a21f46a8466 WatchSource:0}: Error finding container 7043ded1d28328e1a895c673d03491f88e1b0ef1a1ee31c3d7045a21f46a8466: Status 404 returned error can't find the container with id 7043ded1d28328e1a895c673d03491f88e1b0ef1a1ee31c3d7045a21f46a8466 Feb 16 17:21:18 crc kubenswrapper[4794]: I0216 17:21:18.369136 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54acc9db-6bd7-463f-8637-6aa39ed3eb11","Type":"ContainerStarted","Data":"7043ded1d28328e1a895c673d03491f88e1b0ef1a1ee31c3d7045a21f46a8466"} Feb 16 17:21:18 crc kubenswrapper[4794]: I0216 17:21:18.378347 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vc8d5" event={"ID":"fb8edc26-5ad8-440e-9d5b-942b0a287ea4","Type":"ContainerStarted","Data":"447ffb8d8b4495130e9739fabe034c2edbfe34d056da67cd871631699252a06d"} Feb 16 17:21:18 crc kubenswrapper[4794]: I0216 17:21:18.388537 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11e05321-0f2f-4688-abd5-0e3a019bf530","Type":"ContainerStarted","Data":"008ff2bd9ce7d8893f5ee3ee702b3582c0d35c4149594e436e00b58b9e98c6cf"} Feb 16 17:21:18 crc kubenswrapper[4794]: I0216 17:21:18.426606 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-vc8d5" podStartSLOduration=3.272582422 podStartE2EDuration="23.426580693s" podCreationTimestamp="2026-02-16 17:20:55 +0000 UTC" firstStartedPulling="2026-02-16 17:20:56.658543673 +0000 UTC m=+1282.606638320" lastFinishedPulling="2026-02-16 17:21:16.812541944 +0000 UTC m=+1302.760636591" observedRunningTime="2026-02-16 17:21:18.417031961 +0000 UTC m=+1304.365126618" watchObservedRunningTime="2026-02-16 17:21:18.426580693 +0000 UTC m=+1304.374675350" Feb 16 17:21:19 crc 
kubenswrapper[4794]: I0216 17:21:19.403639 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54acc9db-6bd7-463f-8637-6aa39ed3eb11","Type":"ContainerStarted","Data":"47557558d6be1aa2f6aeaa0bc43e079a5c6dbafad9293a7f037441e4f4d0c731"} Feb 16 17:21:20 crc kubenswrapper[4794]: I0216 17:21:20.141961 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:21:20 crc kubenswrapper[4794]: I0216 17:21:20.142650 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:21:20 crc kubenswrapper[4794]: I0216 17:21:20.142723 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 17:21:20 crc kubenswrapper[4794]: I0216 17:21:20.143985 4794 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"07948fa6ee2afc937a020c1d294030183c36d82f2764d0a7fd3e60ea347005ea"} pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:21:20 crc kubenswrapper[4794]: I0216 17:21:20.144047 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" 
containerID="cri-o://07948fa6ee2afc937a020c1d294030183c36d82f2764d0a7fd3e60ea347005ea" gracePeriod=600 Feb 16 17:21:20 crc kubenswrapper[4794]: I0216 17:21:20.415039 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54acc9db-6bd7-463f-8637-6aa39ed3eb11","Type":"ContainerStarted","Data":"6a5d6fa495fbee6bab044ac3878ffe03052866c9c25b1bdf7b8525765dbdd768"} Feb 16 17:21:20 crc kubenswrapper[4794]: I0216 17:21:20.417037 4794 generic.go:334] "Generic (PLEG): container finished" podID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerID="07948fa6ee2afc937a020c1d294030183c36d82f2764d0a7fd3e60ea347005ea" exitCode=0 Feb 16 17:21:20 crc kubenswrapper[4794]: I0216 17:21:20.417096 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerDied","Data":"07948fa6ee2afc937a020c1d294030183c36d82f2764d0a7fd3e60ea347005ea"} Feb 16 17:21:20 crc kubenswrapper[4794]: I0216 17:21:20.417169 4794 scope.go:117] "RemoveContainer" containerID="cce390b6213c7330d230e979677c08327d065b64facb3363518840eb14ee0ef8" Feb 16 17:21:21 crc kubenswrapper[4794]: I0216 17:21:21.439952 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691"} Feb 16 17:21:21 crc kubenswrapper[4794]: I0216 17:21:21.446805 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11e05321-0f2f-4688-abd5-0e3a019bf530","Type":"ContainerStarted","Data":"5cc549ea8614bad5dfa98c57cc365d5a5a03f8153be06d8fd4b938c52276c082"} Feb 16 17:21:21 crc kubenswrapper[4794]: I0216 17:21:21.451091 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"54acc9db-6bd7-463f-8637-6aa39ed3eb11","Type":"ContainerStarted","Data":"ab9bae4c7cec590fbd8ce7d02108a6c09ce7b04a95c70048f00e95fe243c9e01"}
Feb 16 17:21:21 crc kubenswrapper[4794]: I0216 17:21:21.451131 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54acc9db-6bd7-463f-8637-6aa39ed3eb11","Type":"ContainerStarted","Data":"d1b834b3c8492937b2fe1df23e84902ee9c2a4c2d51782e301fdc4b0e50dd6c2"}
Feb 16 17:21:22 crc kubenswrapper[4794]: I0216 17:21:22.463346 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11e05321-0f2f-4688-abd5-0e3a019bf530","Type":"ContainerStarted","Data":"35ebe66994a2389f7c62afa55576d893ed356286ace4a244bc934a0393cde51d"}
Feb 16 17:21:22 crc kubenswrapper[4794]: I0216 17:21:22.464519 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 16 17:21:22 crc kubenswrapper[4794]: I0216 17:21:22.515633 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1"
Feb 16 17:21:22 crc kubenswrapper[4794]: I0216 17:21:22.520574 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2"
Feb 16 17:21:22 crc kubenswrapper[4794]: I0216 17:21:22.527869 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.527845954 podStartE2EDuration="17.527845954s" podCreationTimestamp="2026-02-16 17:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:22.525629288 +0000 UTC m=+1308.473723935" watchObservedRunningTime="2026-02-16 17:21:22.527845954 +0000 UTC m=+1308.475940601"
Feb 16 17:21:23 crc kubenswrapper[4794]: I0216 17:21:23.495565 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54acc9db-6bd7-463f-8637-6aa39ed3eb11","Type":"ContainerStarted","Data":"3cb6e8821907858377eb69438b8072c2ea019734371e306a85887049b89c9e43"}
Feb 16 17:21:23 crc kubenswrapper[4794]: I0216 17:21:23.496437 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54acc9db-6bd7-463f-8637-6aa39ed3eb11","Type":"ContainerStarted","Data":"08654319d98cfb2a2c2a6a6086241904f7b129e344bca1af10650e3ec667e56b"}
Feb 16 17:21:24 crc kubenswrapper[4794]: I0216 17:21:24.513866 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54acc9db-6bd7-463f-8637-6aa39ed3eb11","Type":"ContainerStarted","Data":"0109ff9b26cd2b34bd9930d184c5757080d2e8f837b85c4b5fcbdf58229eec3c"}
Feb 16 17:21:24 crc kubenswrapper[4794]: I0216 17:21:24.514432 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54acc9db-6bd7-463f-8637-6aa39ed3eb11","Type":"ContainerStarted","Data":"51540b229a7caffd7b083e18684f0a6672e540163f9b6fd3828448ab0218cecd"}
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.488507 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-6pcwt"]
Feb 16 17:21:25 crc kubenswrapper[4794]: E0216 17:21:25.489631 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eede4ea8-dde9-4676-81b2-39a7b63c22cb" containerName="ovn-config"
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.489667 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="eede4ea8-dde9-4676-81b2-39a7b63c22cb" containerName="ovn-config"
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.496665 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="eede4ea8-dde9-4676-81b2-39a7b63c22cb" containerName="ovn-config"
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.497772 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-6pcwt"
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.528732 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-6pcwt"]
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.572523 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9e97513-5c89-4917-8e5a-d2230e694e3f-operator-scripts\") pod \"heat-db-create-6pcwt\" (UID: \"f9e97513-5c89-4917-8e5a-d2230e694e3f\") " pod="openstack/heat-db-create-6pcwt"
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.576812 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dw7m\" (UniqueName: \"kubernetes.io/projected/f9e97513-5c89-4917-8e5a-d2230e694e3f-kube-api-access-5dw7m\") pod \"heat-db-create-6pcwt\" (UID: \"f9e97513-5c89-4917-8e5a-d2230e694e3f\") " pod="openstack/heat-db-create-6pcwt"
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.682277 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dw7m\" (UniqueName: \"kubernetes.io/projected/f9e97513-5c89-4917-8e5a-d2230e694e3f-kube-api-access-5dw7m\") pod \"heat-db-create-6pcwt\" (UID: \"f9e97513-5c89-4917-8e5a-d2230e694e3f\") " pod="openstack/heat-db-create-6pcwt"
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.682537 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9e97513-5c89-4917-8e5a-d2230e694e3f-operator-scripts\") pod \"heat-db-create-6pcwt\" (UID: \"f9e97513-5c89-4917-8e5a-d2230e694e3f\") " pod="openstack/heat-db-create-6pcwt"
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.683690 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9e97513-5c89-4917-8e5a-d2230e694e3f-operator-scripts\") pod \"heat-db-create-6pcwt\" (UID: \"f9e97513-5c89-4917-8e5a-d2230e694e3f\") " pod="openstack/heat-db-create-6pcwt"
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.737743 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dw7m\" (UniqueName: \"kubernetes.io/projected/f9e97513-5c89-4917-8e5a-d2230e694e3f-kube-api-access-5dw7m\") pod \"heat-db-create-6pcwt\" (UID: \"f9e97513-5c89-4917-8e5a-d2230e694e3f\") " pod="openstack/heat-db-create-6pcwt"
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.867900 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-6pcwt"
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.915712 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-tk65m"]
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.920298 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tk65m"
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.933258 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.933954 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.934122 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.934854 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gm9wc"
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.944501 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-2ea9-account-create-update-7tt5t"]
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.948418 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-2ea9-account-create-update-7tt5t"
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.957427 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret"
Feb 16 17:21:25 crc kubenswrapper[4794]: I0216 17:21:25.997563 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:25.999916 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c38fe9c-5f26-457a-9209-688ba917fc8c-config-data\") pod \"keystone-db-sync-tk65m\" (UID: \"9c38fe9c-5f26-457a-9209-688ba917fc8c\") " pod="openstack/keystone-db-sync-tk65m"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.000447 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22fec0db-d521-4e76-bd89-7c22ea6a8bb1-operator-scripts\") pod \"heat-2ea9-account-create-update-7tt5t\" (UID: \"22fec0db-d521-4e76-bd89-7c22ea6a8bb1\") " pod="openstack/heat-2ea9-account-create-update-7tt5t"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.000494 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlhp7\" (UniqueName: \"kubernetes.io/projected/22fec0db-d521-4e76-bd89-7c22ea6a8bb1-kube-api-access-xlhp7\") pod \"heat-2ea9-account-create-update-7tt5t\" (UID: \"22fec0db-d521-4e76-bd89-7c22ea6a8bb1\") " pod="openstack/heat-2ea9-account-create-update-7tt5t"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.000770 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c38fe9c-5f26-457a-9209-688ba917fc8c-combined-ca-bundle\") pod \"keystone-db-sync-tk65m\" (UID: \"9c38fe9c-5f26-457a-9209-688ba917fc8c\") " pod="openstack/keystone-db-sync-tk65m"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.000966 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpj5s\" (UniqueName: \"kubernetes.io/projected/9c38fe9c-5f26-457a-9209-688ba917fc8c-kube-api-access-vpj5s\") pod \"keystone-db-sync-tk65m\" (UID: \"9c38fe9c-5f26-457a-9209-688ba917fc8c\") " pod="openstack/keystone-db-sync-tk65m"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.012987 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-2ea9-account-create-update-7tt5t"]
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.040094 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-tk65m"]
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.058364 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-rrqcf"]
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.061426 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rrqcf"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.097637 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-3f33-account-create-update-bmfmg"]
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.104386 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c38fe9c-5f26-457a-9209-688ba917fc8c-combined-ca-bundle\") pod \"keystone-db-sync-tk65m\" (UID: \"9c38fe9c-5f26-457a-9209-688ba917fc8c\") " pod="openstack/keystone-db-sync-tk65m"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.104500 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpj5s\" (UniqueName: \"kubernetes.io/projected/9c38fe9c-5f26-457a-9209-688ba917fc8c-kube-api-access-vpj5s\") pod \"keystone-db-sync-tk65m\" (UID: \"9c38fe9c-5f26-457a-9209-688ba917fc8c\") " pod="openstack/keystone-db-sync-tk65m"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.104952 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3f33-account-create-update-bmfmg"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.112032 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c38fe9c-5f26-457a-9209-688ba917fc8c-config-data\") pod \"keystone-db-sync-tk65m\" (UID: \"9c38fe9c-5f26-457a-9209-688ba917fc8c\") " pod="openstack/keystone-db-sync-tk65m"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.112486 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22fec0db-d521-4e76-bd89-7c22ea6a8bb1-operator-scripts\") pod \"heat-2ea9-account-create-update-7tt5t\" (UID: \"22fec0db-d521-4e76-bd89-7c22ea6a8bb1\") " pod="openstack/heat-2ea9-account-create-update-7tt5t"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.112511 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlhp7\" (UniqueName: \"kubernetes.io/projected/22fec0db-d521-4e76-bd89-7c22ea6a8bb1-kube-api-access-xlhp7\") pod \"heat-2ea9-account-create-update-7tt5t\" (UID: \"22fec0db-d521-4e76-bd89-7c22ea6a8bb1\") " pod="openstack/heat-2ea9-account-create-update-7tt5t"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.116443 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.119988 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22fec0db-d521-4e76-bd89-7c22ea6a8bb1-operator-scripts\") pod \"heat-2ea9-account-create-update-7tt5t\" (UID: \"22fec0db-d521-4e76-bd89-7c22ea6a8bb1\") " pod="openstack/heat-2ea9-account-create-update-7tt5t"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.124759 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c38fe9c-5f26-457a-9209-688ba917fc8c-config-data\") pod \"keystone-db-sync-tk65m\" (UID: \"9c38fe9c-5f26-457a-9209-688ba917fc8c\") " pod="openstack/keystone-db-sync-tk65m"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.149150 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c38fe9c-5f26-457a-9209-688ba917fc8c-combined-ca-bundle\") pod \"keystone-db-sync-tk65m\" (UID: \"9c38fe9c-5f26-457a-9209-688ba917fc8c\") " pod="openstack/keystone-db-sync-tk65m"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.167709 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-rrqcf"]
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.183015 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlhp7\" (UniqueName: \"kubernetes.io/projected/22fec0db-d521-4e76-bd89-7c22ea6a8bb1-kube-api-access-xlhp7\") pod \"heat-2ea9-account-create-update-7tt5t\" (UID: \"22fec0db-d521-4e76-bd89-7c22ea6a8bb1\") " pod="openstack/heat-2ea9-account-create-update-7tt5t"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.183132 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpj5s\" (UniqueName: \"kubernetes.io/projected/9c38fe9c-5f26-457a-9209-688ba917fc8c-kube-api-access-vpj5s\") pod \"keystone-db-sync-tk65m\" (UID: \"9c38fe9c-5f26-457a-9209-688ba917fc8c\") " pod="openstack/keystone-db-sync-tk65m"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.214087 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4-operator-scripts\") pod \"cinder-3f33-account-create-update-bmfmg\" (UID: \"48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4\") " pod="openstack/cinder-3f33-account-create-update-bmfmg"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.224065 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eba0114-90ef-495f-b633-be0e999ee9db-operator-scripts\") pod \"cinder-db-create-rrqcf\" (UID: \"0eba0114-90ef-495f-b633-be0e999ee9db\") " pod="openstack/cinder-db-create-rrqcf"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.224227 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bw9v\" (UniqueName: \"kubernetes.io/projected/0eba0114-90ef-495f-b633-be0e999ee9db-kube-api-access-7bw9v\") pod \"cinder-db-create-rrqcf\" (UID: \"0eba0114-90ef-495f-b633-be0e999ee9db\") " pod="openstack/cinder-db-create-rrqcf"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.224275 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d78dz\" (UniqueName: \"kubernetes.io/projected/48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4-kube-api-access-d78dz\") pod \"cinder-3f33-account-create-update-bmfmg\" (UID: \"48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4\") " pod="openstack/cinder-3f33-account-create-update-bmfmg"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.286115 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tk65m"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.310514 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3f33-account-create-update-bmfmg"]
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.315677 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-2ea9-account-create-update-7tt5t"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.326900 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eba0114-90ef-495f-b633-be0e999ee9db-operator-scripts\") pod \"cinder-db-create-rrqcf\" (UID: \"0eba0114-90ef-495f-b633-be0e999ee9db\") " pod="openstack/cinder-db-create-rrqcf"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.327004 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bw9v\" (UniqueName: \"kubernetes.io/projected/0eba0114-90ef-495f-b633-be0e999ee9db-kube-api-access-7bw9v\") pod \"cinder-db-create-rrqcf\" (UID: \"0eba0114-90ef-495f-b633-be0e999ee9db\") " pod="openstack/cinder-db-create-rrqcf"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.327042 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d78dz\" (UniqueName: \"kubernetes.io/projected/48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4-kube-api-access-d78dz\") pod \"cinder-3f33-account-create-update-bmfmg\" (UID: \"48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4\") " pod="openstack/cinder-3f33-account-create-update-bmfmg"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.327108 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4-operator-scripts\") pod \"cinder-3f33-account-create-update-bmfmg\" (UID: \"48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4\") " pod="openstack/cinder-3f33-account-create-update-bmfmg"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.329411 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eba0114-90ef-495f-b633-be0e999ee9db-operator-scripts\") pod \"cinder-db-create-rrqcf\" (UID: \"0eba0114-90ef-495f-b633-be0e999ee9db\") " pod="openstack/cinder-db-create-rrqcf"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.342343 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4-operator-scripts\") pod \"cinder-3f33-account-create-update-bmfmg\" (UID: \"48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4\") " pod="openstack/cinder-3f33-account-create-update-bmfmg"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.354585 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bw9v\" (UniqueName: \"kubernetes.io/projected/0eba0114-90ef-495f-b633-be0e999ee9db-kube-api-access-7bw9v\") pod \"cinder-db-create-rrqcf\" (UID: \"0eba0114-90ef-495f-b633-be0e999ee9db\") " pod="openstack/cinder-db-create-rrqcf"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.362894 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d78dz\" (UniqueName: \"kubernetes.io/projected/48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4-kube-api-access-d78dz\") pod \"cinder-3f33-account-create-update-bmfmg\" (UID: \"48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4\") " pod="openstack/cinder-3f33-account-create-update-bmfmg"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.416681 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-wd76h"]
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.418988 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-wd76h"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.462386 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-wd76h"]
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.489988 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5932-account-create-update-bd29m"]
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.491586 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5932-account-create-update-bd29m"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.494049 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.531103 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5932-account-create-update-bd29m"]
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.536729 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tm4p\" (UniqueName: \"kubernetes.io/projected/6505f038-47d3-4a1b-a939-11469306ff84-kube-api-access-4tm4p\") pod \"neutron-db-create-wd76h\" (UID: \"6505f038-47d3-4a1b-a939-11469306ff84\") " pod="openstack/neutron-db-create-wd76h"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.537456 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6505f038-47d3-4a1b-a939-11469306ff84-operator-scripts\") pod \"neutron-db-create-wd76h\" (UID: \"6505f038-47d3-4a1b-a939-11469306ff84\") " pod="openstack/neutron-db-create-wd76h"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.544905 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rrqcf"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.565535 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-xp5cn"]
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.584032 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xp5cn"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.589630 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3f33-account-create-update-bmfmg"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.625838 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xp5cn"]
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.648378 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vls89\" (UniqueName: \"kubernetes.io/projected/5589f24e-f4c8-427e-ba13-f0ffb8358940-kube-api-access-vls89\") pod \"neutron-5932-account-create-update-bd29m\" (UID: \"5589f24e-f4c8-427e-ba13-f0ffb8358940\") " pod="openstack/neutron-5932-account-create-update-bd29m"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.648862 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-01fa-account-create-update-mng8s"]
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.651576 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-01fa-account-create-update-mng8s"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.651708 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tm4p\" (UniqueName: \"kubernetes.io/projected/6505f038-47d3-4a1b-a939-11469306ff84-kube-api-access-4tm4p\") pod \"neutron-db-create-wd76h\" (UID: \"6505f038-47d3-4a1b-a939-11469306ff84\") " pod="openstack/neutron-db-create-wd76h"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.652359 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6505f038-47d3-4a1b-a939-11469306ff84-operator-scripts\") pod \"neutron-db-create-wd76h\" (UID: \"6505f038-47d3-4a1b-a939-11469306ff84\") " pod="openstack/neutron-db-create-wd76h"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.652592 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5589f24e-f4c8-427e-ba13-f0ffb8358940-operator-scripts\") pod \"neutron-5932-account-create-update-bd29m\" (UID: \"5589f24e-f4c8-427e-ba13-f0ffb8358940\") " pod="openstack/neutron-5932-account-create-update-bd29m"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.653560 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6505f038-47d3-4a1b-a939-11469306ff84-operator-scripts\") pod \"neutron-db-create-wd76h\" (UID: \"6505f038-47d3-4a1b-a939-11469306ff84\") " pod="openstack/neutron-db-create-wd76h"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.654860 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.669929 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-01fa-account-create-update-mng8s"]
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.688285 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tm4p\" (UniqueName: \"kubernetes.io/projected/6505f038-47d3-4a1b-a939-11469306ff84-kube-api-access-4tm4p\") pod \"neutron-db-create-wd76h\" (UID: \"6505f038-47d3-4a1b-a939-11469306ff84\") " pod="openstack/neutron-db-create-wd76h"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.754670 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmg8j\" (UniqueName: \"kubernetes.io/projected/1eb6af8b-8f65-4725-a2bc-88339a37bf85-kube-api-access-dmg8j\") pod \"barbican-db-create-xp5cn\" (UID: \"1eb6af8b-8f65-4725-a2bc-88339a37bf85\") " pod="openstack/barbican-db-create-xp5cn"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.754776 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1eb6af8b-8f65-4725-a2bc-88339a37bf85-operator-scripts\") pod \"barbican-db-create-xp5cn\" (UID: \"1eb6af8b-8f65-4725-a2bc-88339a37bf85\") " pod="openstack/barbican-db-create-xp5cn"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.754853 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs48t\" (UniqueName: \"kubernetes.io/projected/6989884b-6a5b-4e42-a0c8-bfd3a1361057-kube-api-access-vs48t\") pod \"barbican-01fa-account-create-update-mng8s\" (UID: \"6989884b-6a5b-4e42-a0c8-bfd3a1361057\") " pod="openstack/barbican-01fa-account-create-update-mng8s"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.754903 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5589f24e-f4c8-427e-ba13-f0ffb8358940-operator-scripts\") pod \"neutron-5932-account-create-update-bd29m\" (UID: \"5589f24e-f4c8-427e-ba13-f0ffb8358940\") " pod="openstack/neutron-5932-account-create-update-bd29m"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.754981 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vls89\" (UniqueName: \"kubernetes.io/projected/5589f24e-f4c8-427e-ba13-f0ffb8358940-kube-api-access-vls89\") pod \"neutron-5932-account-create-update-bd29m\" (UID: \"5589f24e-f4c8-427e-ba13-f0ffb8358940\") " pod="openstack/neutron-5932-account-create-update-bd29m"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.755072 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6989884b-6a5b-4e42-a0c8-bfd3a1361057-operator-scripts\") pod \"barbican-01fa-account-create-update-mng8s\" (UID: \"6989884b-6a5b-4e42-a0c8-bfd3a1361057\") " pod="openstack/barbican-01fa-account-create-update-mng8s"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.755971 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5589f24e-f4c8-427e-ba13-f0ffb8358940-operator-scripts\") pod \"neutron-5932-account-create-update-bd29m\" (UID: \"5589f24e-f4c8-427e-ba13-f0ffb8358940\") " pod="openstack/neutron-5932-account-create-update-bd29m"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.781266 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vls89\" (UniqueName: \"kubernetes.io/projected/5589f24e-f4c8-427e-ba13-f0ffb8358940-kube-api-access-vls89\") pod \"neutron-5932-account-create-update-bd29m\" (UID: \"5589f24e-f4c8-427e-ba13-f0ffb8358940\") " pod="openstack/neutron-5932-account-create-update-bd29m"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.806754 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-wd76h"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.823550 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5932-account-create-update-bd29m"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.857952 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmg8j\" (UniqueName: \"kubernetes.io/projected/1eb6af8b-8f65-4725-a2bc-88339a37bf85-kube-api-access-dmg8j\") pod \"barbican-db-create-xp5cn\" (UID: \"1eb6af8b-8f65-4725-a2bc-88339a37bf85\") " pod="openstack/barbican-db-create-xp5cn"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.858033 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1eb6af8b-8f65-4725-a2bc-88339a37bf85-operator-scripts\") pod \"barbican-db-create-xp5cn\" (UID: \"1eb6af8b-8f65-4725-a2bc-88339a37bf85\") " pod="openstack/barbican-db-create-xp5cn"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.858105 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs48t\" (UniqueName: \"kubernetes.io/projected/6989884b-6a5b-4e42-a0c8-bfd3a1361057-kube-api-access-vs48t\") pod \"barbican-01fa-account-create-update-mng8s\" (UID: \"6989884b-6a5b-4e42-a0c8-bfd3a1361057\") " pod="openstack/barbican-01fa-account-create-update-mng8s"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.858521 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6989884b-6a5b-4e42-a0c8-bfd3a1361057-operator-scripts\") pod \"barbican-01fa-account-create-update-mng8s\" (UID: \"6989884b-6a5b-4e42-a0c8-bfd3a1361057\") " pod="openstack/barbican-01fa-account-create-update-mng8s"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.859870 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6989884b-6a5b-4e42-a0c8-bfd3a1361057-operator-scripts\") pod \"barbican-01fa-account-create-update-mng8s\" (UID: \"6989884b-6a5b-4e42-a0c8-bfd3a1361057\") " pod="openstack/barbican-01fa-account-create-update-mng8s"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.863273 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1eb6af8b-8f65-4725-a2bc-88339a37bf85-operator-scripts\") pod \"barbican-db-create-xp5cn\" (UID: \"1eb6af8b-8f65-4725-a2bc-88339a37bf85\") " pod="openstack/barbican-db-create-xp5cn"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.894669 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs48t\" (UniqueName: \"kubernetes.io/projected/6989884b-6a5b-4e42-a0c8-bfd3a1361057-kube-api-access-vs48t\") pod \"barbican-01fa-account-create-update-mng8s\" (UID: \"6989884b-6a5b-4e42-a0c8-bfd3a1361057\") " pod="openstack/barbican-01fa-account-create-update-mng8s"
Feb 16 17:21:26 crc kubenswrapper[4794]: I0216 17:21:26.923013 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmg8j\" (UniqueName: \"kubernetes.io/projected/1eb6af8b-8f65-4725-a2bc-88339a37bf85-kube-api-access-dmg8j\") pod \"barbican-db-create-xp5cn\" (UID: \"1eb6af8b-8f65-4725-a2bc-88339a37bf85\") " pod="openstack/barbican-db-create-xp5cn"
Feb 16 17:21:27 crc kubenswrapper[4794]: I0216 17:21:26.998084 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-01fa-account-create-update-mng8s"
Feb 16 17:21:27 crc kubenswrapper[4794]: I0216 17:21:27.224224 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xp5cn"
Feb 16 17:21:27 crc kubenswrapper[4794]: I0216 17:21:27.342934 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-tk65m"]
Feb 16 17:21:27 crc kubenswrapper[4794]: W0216 17:21:27.369169 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c38fe9c_5f26_457a_9209_688ba917fc8c.slice/crio-dcde796ac2a5c68ed899ad84b10724f9f2cb6a343632bbafe7694f723b8eb1f2 WatchSource:0}: Error finding container dcde796ac2a5c68ed899ad84b10724f9f2cb6a343632bbafe7694f723b8eb1f2: Status 404 returned error can't find the container with id dcde796ac2a5c68ed899ad84b10724f9f2cb6a343632bbafe7694f723b8eb1f2
Feb 16 17:21:27 crc kubenswrapper[4794]: I0216 17:21:27.576339 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54acc9db-6bd7-463f-8637-6aa39ed3eb11","Type":"ContainerStarted","Data":"b21f502211d0800f64d14ebb06992a0c9867b71732369a89e49d748e5aebbac5"}
Feb 16 17:21:27 crc kubenswrapper[4794]: I0216 17:21:27.578403 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tk65m" event={"ID":"9c38fe9c-5f26-457a-9209-688ba917fc8c","Type":"ContainerStarted","Data":"dcde796ac2a5c68ed899ad84b10724f9f2cb6a343632bbafe7694f723b8eb1f2"}
Feb 16 17:21:27 crc kubenswrapper[4794]: I0216 17:21:27.779371 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3f33-account-create-update-bmfmg"]
Feb 16 17:21:27 crc kubenswrapper[4794]: I0216 17:21:27.789608 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-6pcwt"]
Feb 16 17:21:27 crc kubenswrapper[4794]: I0216 17:21:27.800090 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-2ea9-account-create-update-7tt5t"]
Feb 16 17:21:27 crc kubenswrapper[4794]: W0216 17:21:27.831046 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22fec0db_d521_4e76_bd89_7c22ea6a8bb1.slice/crio-1791d7e0d738aca712dbca9d2b71d68a797082a0372790caecce2e0f844186dc WatchSource:0}: Error finding container 1791d7e0d738aca712dbca9d2b71d68a797082a0372790caecce2e0f844186dc: Status 404 returned error can't find the container with id 1791d7e0d738aca712dbca9d2b71d68a797082a0372790caecce2e0f844186dc
Feb 16 17:21:28 crc kubenswrapper[4794]: I0216 17:21:28.222985 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-01fa-account-create-update-mng8s"]
Feb 16 17:21:28 crc kubenswrapper[4794]: I0216 17:21:28.244611 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-rrqcf"]
Feb 16 17:21:28 crc kubenswrapper[4794]: I0216 17:21:28.267431 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-wd76h"]
Feb 16 17:21:28 crc kubenswrapper[4794]: W0216 17:21:28.277013 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0eba0114_90ef_495f_b633_be0e999ee9db.slice/crio-416298e79a801f7285a6b4f578abca82d40d6868d9983149b9aafd351d6d005f WatchSource:0}: Error finding container 416298e79a801f7285a6b4f578abca82d40d6868d9983149b9aafd351d6d005f: Status 404 returned error can't find the container with id 416298e79a801f7285a6b4f578abca82d40d6868d9983149b9aafd351d6d005f
Feb 16 17:21:28 crc kubenswrapper[4794]: W0216 17:21:28.288733 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6505f038_47d3_4a1b_a939_11469306ff84.slice/crio-e1f989cda2fa2840c4b4356329f8897b31816554e0c5002a755117d1caed139d WatchSource:0}: Error finding container e1f989cda2fa2840c4b4356329f8897b31816554e0c5002a755117d1caed139d: Status 404 returned error can't find the container with id e1f989cda2fa2840c4b4356329f8897b31816554e0c5002a755117d1caed139d
Feb 16 17:21:28 crc kubenswrapper[4794]: I0216 17:21:28.289892 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5932-account-create-update-bd29m"]
Feb 16 17:21:28 crc kubenswrapper[4794]: I0216 17:21:28.305723 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xp5cn"]
Feb 16 17:21:28 crc kubenswrapper[4794]: I0216 17:21:28.604878 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wd76h" event={"ID":"6505f038-47d3-4a1b-a939-11469306ff84","Type":"ContainerStarted","Data":"e1f989cda2fa2840c4b4356329f8897b31816554e0c5002a755117d1caed139d"}
Feb 16 17:21:28 crc kubenswrapper[4794]: I0216 17:21:28.606802 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-01fa-account-create-update-mng8s" event={"ID":"6989884b-6a5b-4e42-a0c8-bfd3a1361057","Type":"ContainerStarted","Data":"5717e2b31e74fb7a1e96f51f6831550663140ab156e8222701c71657f9822993"}
Feb 16 17:21:28 crc kubenswrapper[4794]: I0216 17:21:28.608370 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3f33-account-create-update-bmfmg" event={"ID":"48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4","Type":"ContainerStarted","Data":"268923bb88d3ff319485c8986c493ad543f3c0460287cabb6c4072e8fbd1d43a"}
Feb 16 17:21:28 crc kubenswrapper[4794]: I0216 17:21:28.608404 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3f33-account-create-update-bmfmg" event={"ID":"48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4","Type":"ContainerStarted","Data":"186c4e863028fd7663033e09f9919add5a8b49882e9901e401a9979607cf46d5"}
Feb 16 17:21:28 crc kubenswrapper[4794]: I0216 17:21:28.612611 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-6pcwt" event={"ID":"f9e97513-5c89-4917-8e5a-d2230e694e3f","Type":"ContainerStarted","Data":"2a632e977a49b27ea68bee7de6a2f979b999ad36f19d9f783c004e149891fc59"}
Feb 16 17:21:28 crc kubenswrapper[4794]: I0216 17:21:28.612682 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-6pcwt" event={"ID":"f9e97513-5c89-4917-8e5a-d2230e694e3f","Type":"ContainerStarted","Data":"d8c621d8c872055732231c8ca3c955435c82a9aa958695dba0d4bda3637cacea"} Feb 16 17:21:28 crc kubenswrapper[4794]: I0216 17:21:28.635512 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54acc9db-6bd7-463f-8637-6aa39ed3eb11","Type":"ContainerStarted","Data":"b6de53f4e8e526707bf94e3febd1e81f19f8e935ac590743ad72de14a8b45cf4"} Feb 16 17:21:28 crc kubenswrapper[4794]: I0216 17:21:28.635613 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54acc9db-6bd7-463f-8637-6aa39ed3eb11","Type":"ContainerStarted","Data":"13874520337c596c713ba4dbe7bbd244b93b27f11a3f1627069fc08ca4c19e3a"} Feb 16 17:21:28 crc kubenswrapper[4794]: I0216 17:21:28.645869 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-2ea9-account-create-update-7tt5t" event={"ID":"22fec0db-d521-4e76-bd89-7c22ea6a8bb1","Type":"ContainerStarted","Data":"1791d7e0d738aca712dbca9d2b71d68a797082a0372790caecce2e0f844186dc"} Feb 16 17:21:28 crc kubenswrapper[4794]: I0216 17:21:28.650297 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5932-account-create-update-bd29m" event={"ID":"5589f24e-f4c8-427e-ba13-f0ffb8358940","Type":"ContainerStarted","Data":"89f712780c47af532c1644088893007104b77fec05eec3b499093f081bc1c001"} Feb 16 17:21:28 crc kubenswrapper[4794]: I0216 17:21:28.651897 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xp5cn" event={"ID":"1eb6af8b-8f65-4725-a2bc-88339a37bf85","Type":"ContainerStarted","Data":"9370e8c51441f0b59517c4de8ff64a51cccc5a6e9b08a9a02abd51a96ba31683"} Feb 16 17:21:28 crc kubenswrapper[4794]: I0216 17:21:28.656251 4794 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/cinder-3f33-account-create-update-bmfmg" podStartSLOduration=3.656216368 podStartE2EDuration="3.656216368s" podCreationTimestamp="2026-02-16 17:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:28.626557668 +0000 UTC m=+1314.574652315" watchObservedRunningTime="2026-02-16 17:21:28.656216368 +0000 UTC m=+1314.604311015" Feb 16 17:21:28 crc kubenswrapper[4794]: I0216 17:21:28.657551 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rrqcf" event={"ID":"0eba0114-90ef-495f-b633-be0e999ee9db","Type":"ContainerStarted","Data":"416298e79a801f7285a6b4f578abca82d40d6868d9983149b9aafd351d6d005f"} Feb 16 17:21:28 crc kubenswrapper[4794]: I0216 17:21:28.660859 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-6pcwt" podStartSLOduration=3.660832025 podStartE2EDuration="3.660832025s" podCreationTimestamp="2026-02-16 17:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:28.64245295 +0000 UTC m=+1314.590547607" watchObservedRunningTime="2026-02-16 17:21:28.660832025 +0000 UTC m=+1314.608926672" Feb 16 17:21:29 crc kubenswrapper[4794]: I0216 17:21:29.684224 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5932-account-create-update-bd29m" event={"ID":"5589f24e-f4c8-427e-ba13-f0ffb8358940","Type":"ContainerStarted","Data":"fbbbf2b86d6f7aca18f522d584fa582d33447ddaebfe59cf234b9265bca71fd0"} Feb 16 17:21:29 crc kubenswrapper[4794]: I0216 17:21:29.704617 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5932-account-create-update-bd29m" podStartSLOduration=3.704593973 podStartE2EDuration="3.704593973s" podCreationTimestamp="2026-02-16 17:21:26 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:29.704154152 +0000 UTC m=+1315.652248799" watchObservedRunningTime="2026-02-16 17:21:29.704593973 +0000 UTC m=+1315.652688620" Feb 16 17:21:29 crc kubenswrapper[4794]: I0216 17:21:29.707014 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xp5cn" event={"ID":"1eb6af8b-8f65-4725-a2bc-88339a37bf85","Type":"ContainerStarted","Data":"893e845e410be8e6b6a4dfd5bffbe3bb05b49af4c1da8177fb88b502bd7ceb60"} Feb 16 17:21:29 crc kubenswrapper[4794]: I0216 17:21:29.730037 4794 generic.go:334] "Generic (PLEG): container finished" podID="48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4" containerID="268923bb88d3ff319485c8986c493ad543f3c0460287cabb6c4072e8fbd1d43a" exitCode=0 Feb 16 17:21:29 crc kubenswrapper[4794]: I0216 17:21:29.730185 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3f33-account-create-update-bmfmg" event={"ID":"48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4","Type":"ContainerDied","Data":"268923bb88d3ff319485c8986c493ad543f3c0460287cabb6c4072e8fbd1d43a"} Feb 16 17:21:29 crc kubenswrapper[4794]: I0216 17:21:29.737073 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-xp5cn" podStartSLOduration=3.737046184 podStartE2EDuration="3.737046184s" podCreationTimestamp="2026-02-16 17:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:29.725074661 +0000 UTC m=+1315.673169318" watchObservedRunningTime="2026-02-16 17:21:29.737046184 +0000 UTC m=+1315.685140831" Feb 16 17:21:29 crc kubenswrapper[4794]: I0216 17:21:29.737733 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rrqcf" 
event={"ID":"0eba0114-90ef-495f-b633-be0e999ee9db","Type":"ContainerStarted","Data":"0c8d4cc22b9fe6eab62d122b6ce1664ad3d47285de67635dd1363762627e7ad4"} Feb 16 17:21:29 crc kubenswrapper[4794]: I0216 17:21:29.744934 4794 generic.go:334] "Generic (PLEG): container finished" podID="f9e97513-5c89-4917-8e5a-d2230e694e3f" containerID="2a632e977a49b27ea68bee7de6a2f979b999ad36f19d9f783c004e149891fc59" exitCode=0 Feb 16 17:21:29 crc kubenswrapper[4794]: I0216 17:21:29.745029 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-6pcwt" event={"ID":"f9e97513-5c89-4917-8e5a-d2230e694e3f","Type":"ContainerDied","Data":"2a632e977a49b27ea68bee7de6a2f979b999ad36f19d9f783c004e149891fc59"} Feb 16 17:21:29 crc kubenswrapper[4794]: I0216 17:21:29.786222 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54acc9db-6bd7-463f-8637-6aa39ed3eb11","Type":"ContainerStarted","Data":"caf2c4ec87e0a4e2de0af6b32898e3a5b17ec9a7f96e3ddb1b818d59415fbf46"} Feb 16 17:21:29 crc kubenswrapper[4794]: I0216 17:21:29.796045 4794 generic.go:334] "Generic (PLEG): container finished" podID="22fec0db-d521-4e76-bd89-7c22ea6a8bb1" containerID="9bc110d2f764d6184910d501cd998ee52e2930479bfbae83d7a123976df54630" exitCode=0 Feb 16 17:21:29 crc kubenswrapper[4794]: I0216 17:21:29.796199 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-2ea9-account-create-update-7tt5t" event={"ID":"22fec0db-d521-4e76-bd89-7c22ea6a8bb1","Type":"ContainerDied","Data":"9bc110d2f764d6184910d501cd998ee52e2930479bfbae83d7a123976df54630"} Feb 16 17:21:29 crc kubenswrapper[4794]: I0216 17:21:29.806515 4794 generic.go:334] "Generic (PLEG): container finished" podID="6989884b-6a5b-4e42-a0c8-bfd3a1361057" containerID="35c3affb2961c8861c1b9db09d2342b5abdd819f5017af5b25f4de81066ec822" exitCode=0 Feb 16 17:21:29 crc kubenswrapper[4794]: I0216 17:21:29.806655 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-01fa-account-create-update-mng8s" event={"ID":"6989884b-6a5b-4e42-a0c8-bfd3a1361057","Type":"ContainerDied","Data":"35c3affb2961c8861c1b9db09d2342b5abdd819f5017af5b25f4de81066ec822"} Feb 16 17:21:29 crc kubenswrapper[4794]: I0216 17:21:29.825666 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wd76h" event={"ID":"6505f038-47d3-4a1b-a939-11469306ff84","Type":"ContainerStarted","Data":"630575e6e05bf43ed348d66618f77d949bba32704d4b42395e01551c8dadadf9"} Feb 16 17:21:29 crc kubenswrapper[4794]: I0216 17:21:29.855678 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-wd76h" podStartSLOduration=3.8556559630000002 podStartE2EDuration="3.855655963s" podCreationTimestamp="2026-02-16 17:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:29.852223826 +0000 UTC m=+1315.800318483" watchObservedRunningTime="2026-02-16 17:21:29.855655963 +0000 UTC m=+1315.803750610" Feb 16 17:21:30 crc kubenswrapper[4794]: I0216 17:21:30.843121 4794 generic.go:334] "Generic (PLEG): container finished" podID="0eba0114-90ef-495f-b633-be0e999ee9db" containerID="0c8d4cc22b9fe6eab62d122b6ce1664ad3d47285de67635dd1363762627e7ad4" exitCode=0 Feb 16 17:21:30 crc kubenswrapper[4794]: I0216 17:21:30.843506 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rrqcf" event={"ID":"0eba0114-90ef-495f-b633-be0e999ee9db","Type":"ContainerDied","Data":"0c8d4cc22b9fe6eab62d122b6ce1664ad3d47285de67635dd1363762627e7ad4"} Feb 16 17:21:30 crc kubenswrapper[4794]: I0216 17:21:30.852832 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54acc9db-6bd7-463f-8637-6aa39ed3eb11","Type":"ContainerStarted","Data":"305e44f762d06bc347e88704efa33aca9f18d2df2151a5cb1d6b2fc68210f86e"} Feb 16 17:21:30 crc kubenswrapper[4794]: I0216 
17:21:30.857903 4794 generic.go:334] "Generic (PLEG): container finished" podID="5589f24e-f4c8-427e-ba13-f0ffb8358940" containerID="fbbbf2b86d6f7aca18f522d584fa582d33447ddaebfe59cf234b9265bca71fd0" exitCode=0 Feb 16 17:21:30 crc kubenswrapper[4794]: I0216 17:21:30.858179 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5932-account-create-update-bd29m" event={"ID":"5589f24e-f4c8-427e-ba13-f0ffb8358940","Type":"ContainerDied","Data":"fbbbf2b86d6f7aca18f522d584fa582d33447ddaebfe59cf234b9265bca71fd0"} Feb 16 17:21:30 crc kubenswrapper[4794]: I0216 17:21:30.859966 4794 generic.go:334] "Generic (PLEG): container finished" podID="1eb6af8b-8f65-4725-a2bc-88339a37bf85" containerID="893e845e410be8e6b6a4dfd5bffbe3bb05b49af4c1da8177fb88b502bd7ceb60" exitCode=0 Feb 16 17:21:30 crc kubenswrapper[4794]: I0216 17:21:30.860140 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xp5cn" event={"ID":"1eb6af8b-8f65-4725-a2bc-88339a37bf85","Type":"ContainerDied","Data":"893e845e410be8e6b6a4dfd5bffbe3bb05b49af4c1da8177fb88b502bd7ceb60"} Feb 16 17:21:30 crc kubenswrapper[4794]: I0216 17:21:30.864434 4794 generic.go:334] "Generic (PLEG): container finished" podID="6505f038-47d3-4a1b-a939-11469306ff84" containerID="630575e6e05bf43ed348d66618f77d949bba32704d4b42395e01551c8dadadf9" exitCode=0 Feb 16 17:21:30 crc kubenswrapper[4794]: I0216 17:21:30.864580 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wd76h" event={"ID":"6505f038-47d3-4a1b-a939-11469306ff84","Type":"ContainerDied","Data":"630575e6e05bf43ed348d66618f77d949bba32704d4b42395e01551c8dadadf9"} Feb 16 17:21:31 crc kubenswrapper[4794]: I0216 17:21:31.897426 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54acc9db-6bd7-463f-8637-6aa39ed3eb11","Type":"ContainerStarted","Data":"752621122b95ff20269a6a304466587ded90566078e47a3d05579dd2e2f93b5d"} Feb 16 17:21:31 crc 
kubenswrapper[4794]: I0216 17:21:31.908730 4794 generic.go:334] "Generic (PLEG): container finished" podID="fb8edc26-5ad8-440e-9d5b-942b0a287ea4" containerID="447ffb8d8b4495130e9739fabe034c2edbfe34d056da67cd871631699252a06d" exitCode=0 Feb 16 17:21:31 crc kubenswrapper[4794]: I0216 17:21:31.928712 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vc8d5" event={"ID":"fb8edc26-5ad8-440e-9d5b-942b0a287ea4","Type":"ContainerDied","Data":"447ffb8d8b4495130e9739fabe034c2edbfe34d056da67cd871631699252a06d"} Feb 16 17:21:32 crc kubenswrapper[4794]: E0216 17:21:32.041142 4794 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb8edc26_5ad8_440e_9d5b_942b0a287ea4.slice/crio-447ffb8d8b4495130e9739fabe034c2edbfe34d056da67cd871631699252a06d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb8edc26_5ad8_440e_9d5b_942b0a287ea4.slice/crio-conmon-447ffb8d8b4495130e9739fabe034c2edbfe34d056da67cd871631699252a06d.scope\": RecentStats: unable to find data in memory cache]" Feb 16 17:21:34 crc kubenswrapper[4794]: I0216 17:21:34.887399 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-2ea9-account-create-update-7tt5t" Feb 16 17:21:34 crc kubenswrapper[4794]: I0216 17:21:34.927715 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-01fa-account-create-update-mng8s" Feb 16 17:21:34 crc kubenswrapper[4794]: I0216 17:21:34.931605 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-wd76h" Feb 16 17:21:34 crc kubenswrapper[4794]: I0216 17:21:34.970146 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-rrqcf" Feb 16 17:21:34 crc kubenswrapper[4794]: I0216 17:21:34.974829 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xp5cn" event={"ID":"1eb6af8b-8f65-4725-a2bc-88339a37bf85","Type":"ContainerDied","Data":"9370e8c51441f0b59517c4de8ff64a51cccc5a6e9b08a9a02abd51a96ba31683"} Feb 16 17:21:34 crc kubenswrapper[4794]: I0216 17:21:34.974897 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9370e8c51441f0b59517c4de8ff64a51cccc5a6e9b08a9a02abd51a96ba31683" Feb 16 17:21:34 crc kubenswrapper[4794]: I0216 17:21:34.977063 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3f33-account-create-update-bmfmg" Feb 16 17:21:34 crc kubenswrapper[4794]: I0216 17:21:34.978698 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-01fa-account-create-update-mng8s" event={"ID":"6989884b-6a5b-4e42-a0c8-bfd3a1361057","Type":"ContainerDied","Data":"5717e2b31e74fb7a1e96f51f6831550663140ab156e8222701c71657f9822993"} Feb 16 17:21:34 crc kubenswrapper[4794]: I0216 17:21:34.978739 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5717e2b31e74fb7a1e96f51f6831550663140ab156e8222701c71657f9822993" Feb 16 17:21:34 crc kubenswrapper[4794]: I0216 17:21:34.978930 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-01fa-account-create-update-mng8s" Feb 16 17:21:34 crc kubenswrapper[4794]: I0216 17:21:34.990005 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3f33-account-create-update-bmfmg" event={"ID":"48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4","Type":"ContainerDied","Data":"186c4e863028fd7663033e09f9919add5a8b49882e9901e401a9979607cf46d5"} Feb 16 17:21:34 crc kubenswrapper[4794]: I0216 17:21:34.990061 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="186c4e863028fd7663033e09f9919add5a8b49882e9901e401a9979607cf46d5" Feb 16 17:21:34 crc kubenswrapper[4794]: I0216 17:21:34.990126 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3f33-account-create-update-bmfmg" Feb 16 17:21:34 crc kubenswrapper[4794]: I0216 17:21:34.992010 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-6pcwt" Feb 16 17:21:34 crc kubenswrapper[4794]: I0216 17:21:34.993405 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wd76h" event={"ID":"6505f038-47d3-4a1b-a939-11469306ff84","Type":"ContainerDied","Data":"e1f989cda2fa2840c4b4356329f8897b31816554e0c5002a755117d1caed139d"} Feb 16 17:21:34 crc kubenswrapper[4794]: I0216 17:21:34.993441 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1f989cda2fa2840c4b4356329f8897b31816554e0c5002a755117d1caed139d" Feb 16 17:21:34 crc kubenswrapper[4794]: I0216 17:21:34.993492 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-wd76h" Feb 16 17:21:34 crc kubenswrapper[4794]: I0216 17:21:34.999239 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rrqcf" event={"ID":"0eba0114-90ef-495f-b633-be0e999ee9db","Type":"ContainerDied","Data":"416298e79a801f7285a6b4f578abca82d40d6868d9983149b9aafd351d6d005f"} Feb 16 17:21:34 crc kubenswrapper[4794]: I0216 17:21:34.999288 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="416298e79a801f7285a6b4f578abca82d40d6868d9983149b9aafd351d6d005f" Feb 16 17:21:34 crc kubenswrapper[4794]: I0216 17:21:34.999389 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rrqcf" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.009695 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-2ea9-account-create-update-7tt5t" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.010499 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-2ea9-account-create-update-7tt5t" event={"ID":"22fec0db-d521-4e76-bd89-7c22ea6a8bb1","Type":"ContainerDied","Data":"1791d7e0d738aca712dbca9d2b71d68a797082a0372790caecce2e0f844186dc"} Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.013381 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1791d7e0d738aca712dbca9d2b71d68a797082a0372790caecce2e0f844186dc" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.015456 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-vc8d5" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.023623 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vc8d5" event={"ID":"fb8edc26-5ad8-440e-9d5b-942b0a287ea4","Type":"ContainerDied","Data":"0711b1e3030ef020187ce0f0267a0c3cfda62765bb915a83b935c700fa60e163"} Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.023660 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0711b1e3030ef020187ce0f0267a0c3cfda62765bb915a83b935c700fa60e163" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.025092 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5932-account-create-update-bd29m" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.025895 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-6pcwt" event={"ID":"f9e97513-5c89-4917-8e5a-d2230e694e3f","Type":"ContainerDied","Data":"d8c621d8c872055732231c8ca3c955435c82a9aa958695dba0d4bda3637cacea"} Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.025911 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8c621d8c872055732231c8ca3c955435c82a9aa958695dba0d4bda3637cacea" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.025984 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-6pcwt" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.029151 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6505f038-47d3-4a1b-a939-11469306ff84-operator-scripts\") pod \"6505f038-47d3-4a1b-a939-11469306ff84\" (UID: \"6505f038-47d3-4a1b-a939-11469306ff84\") " Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.030053 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6505f038-47d3-4a1b-a939-11469306ff84-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6505f038-47d3-4a1b-a939-11469306ff84" (UID: "6505f038-47d3-4a1b-a939-11469306ff84"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.029869 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlhp7\" (UniqueName: \"kubernetes.io/projected/22fec0db-d521-4e76-bd89-7c22ea6a8bb1-kube-api-access-xlhp7\") pod \"22fec0db-d521-4e76-bd89-7c22ea6a8bb1\" (UID: \"22fec0db-d521-4e76-bd89-7c22ea6a8bb1\") " Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.032454 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs48t\" (UniqueName: \"kubernetes.io/projected/6989884b-6a5b-4e42-a0c8-bfd3a1361057-kube-api-access-vs48t\") pod \"6989884b-6a5b-4e42-a0c8-bfd3a1361057\" (UID: \"6989884b-6a5b-4e42-a0c8-bfd3a1361057\") " Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.032742 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6989884b-6a5b-4e42-a0c8-bfd3a1361057-operator-scripts\") pod \"6989884b-6a5b-4e42-a0c8-bfd3a1361057\" (UID: \"6989884b-6a5b-4e42-a0c8-bfd3a1361057\") " Feb 16 17:21:35 crc 
kubenswrapper[4794]: I0216 17:21:35.032798 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tm4p\" (UniqueName: \"kubernetes.io/projected/6505f038-47d3-4a1b-a939-11469306ff84-kube-api-access-4tm4p\") pod \"6505f038-47d3-4a1b-a939-11469306ff84\" (UID: \"6505f038-47d3-4a1b-a939-11469306ff84\") " Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.032848 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22fec0db-d521-4e76-bd89-7c22ea6a8bb1-operator-scripts\") pod \"22fec0db-d521-4e76-bd89-7c22ea6a8bb1\" (UID: \"22fec0db-d521-4e76-bd89-7c22ea6a8bb1\") " Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.032928 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eba0114-90ef-495f-b633-be0e999ee9db-operator-scripts\") pod \"0eba0114-90ef-495f-b633-be0e999ee9db\" (UID: \"0eba0114-90ef-495f-b633-be0e999ee9db\") " Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.033017 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bw9v\" (UniqueName: \"kubernetes.io/projected/0eba0114-90ef-495f-b633-be0e999ee9db-kube-api-access-7bw9v\") pod \"0eba0114-90ef-495f-b633-be0e999ee9db\" (UID: \"0eba0114-90ef-495f-b633-be0e999ee9db\") " Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.033350 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6989884b-6a5b-4e42-a0c8-bfd3a1361057-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6989884b-6a5b-4e42-a0c8-bfd3a1361057" (UID: "6989884b-6a5b-4e42-a0c8-bfd3a1361057"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.034098 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22fec0db-d521-4e76-bd89-7c22ea6a8bb1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22fec0db-d521-4e76-bd89-7c22ea6a8bb1" (UID: "22fec0db-d521-4e76-bd89-7c22ea6a8bb1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.034410 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22fec0db-d521-4e76-bd89-7c22ea6a8bb1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.034438 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6505f038-47d3-4a1b-a939-11469306ff84-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.034449 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6989884b-6a5b-4e42-a0c8-bfd3a1361057-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.034826 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eba0114-90ef-495f-b633-be0e999ee9db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0eba0114-90ef-495f-b633-be0e999ee9db" (UID: "0eba0114-90ef-495f-b633-be0e999ee9db"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.038702 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22fec0db-d521-4e76-bd89-7c22ea6a8bb1-kube-api-access-xlhp7" (OuterVolumeSpecName: "kube-api-access-xlhp7") pod "22fec0db-d521-4e76-bd89-7c22ea6a8bb1" (UID: "22fec0db-d521-4e76-bd89-7c22ea6a8bb1"). InnerVolumeSpecName "kube-api-access-xlhp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.039728 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eba0114-90ef-495f-b633-be0e999ee9db-kube-api-access-7bw9v" (OuterVolumeSpecName: "kube-api-access-7bw9v") pod "0eba0114-90ef-495f-b633-be0e999ee9db" (UID: "0eba0114-90ef-495f-b633-be0e999ee9db"). InnerVolumeSpecName "kube-api-access-7bw9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.043502 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5932-account-create-update-bd29m" event={"ID":"5589f24e-f4c8-427e-ba13-f0ffb8358940","Type":"ContainerDied","Data":"89f712780c47af532c1644088893007104b77fec05eec3b499093f081bc1c001"} Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.043544 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89f712780c47af532c1644088893007104b77fec05eec3b499093f081bc1c001" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.043611 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5932-account-create-update-bd29m" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.053632 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-xp5cn" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.073544 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6989884b-6a5b-4e42-a0c8-bfd3a1361057-kube-api-access-vs48t" (OuterVolumeSpecName: "kube-api-access-vs48t") pod "6989884b-6a5b-4e42-a0c8-bfd3a1361057" (UID: "6989884b-6a5b-4e42-a0c8-bfd3a1361057"). InnerVolumeSpecName "kube-api-access-vs48t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.081922 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6505f038-47d3-4a1b-a939-11469306ff84-kube-api-access-4tm4p" (OuterVolumeSpecName: "kube-api-access-4tm4p") pod "6505f038-47d3-4a1b-a939-11469306ff84" (UID: "6505f038-47d3-4a1b-a939-11469306ff84"). InnerVolumeSpecName "kube-api-access-4tm4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.135198 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vls89\" (UniqueName: \"kubernetes.io/projected/5589f24e-f4c8-427e-ba13-f0ffb8358940-kube-api-access-vls89\") pod \"5589f24e-f4c8-427e-ba13-f0ffb8358940\" (UID: \"5589f24e-f4c8-427e-ba13-f0ffb8358940\") " Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.135273 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-combined-ca-bundle\") pod \"fb8edc26-5ad8-440e-9d5b-942b0a287ea4\" (UID: \"fb8edc26-5ad8-440e-9d5b-942b0a287ea4\") " Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.135384 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmg8j\" (UniqueName: \"kubernetes.io/projected/1eb6af8b-8f65-4725-a2bc-88339a37bf85-kube-api-access-dmg8j\") pod 
\"1eb6af8b-8f65-4725-a2bc-88339a37bf85\" (UID: \"1eb6af8b-8f65-4725-a2bc-88339a37bf85\") " Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.135434 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1eb6af8b-8f65-4725-a2bc-88339a37bf85-operator-scripts\") pod \"1eb6af8b-8f65-4725-a2bc-88339a37bf85\" (UID: \"1eb6af8b-8f65-4725-a2bc-88339a37bf85\") " Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.135477 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4-operator-scripts\") pod \"48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4\" (UID: \"48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4\") " Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.135501 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dw7m\" (UniqueName: \"kubernetes.io/projected/f9e97513-5c89-4917-8e5a-d2230e694e3f-kube-api-access-5dw7m\") pod \"f9e97513-5c89-4917-8e5a-d2230e694e3f\" (UID: \"f9e97513-5c89-4917-8e5a-d2230e694e3f\") " Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.135573 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-config-data\") pod \"fb8edc26-5ad8-440e-9d5b-942b0a287ea4\" (UID: \"fb8edc26-5ad8-440e-9d5b-942b0a287ea4\") " Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.135608 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjp46\" (UniqueName: \"kubernetes.io/projected/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-kube-api-access-kjp46\") pod \"fb8edc26-5ad8-440e-9d5b-942b0a287ea4\" (UID: \"fb8edc26-5ad8-440e-9d5b-942b0a287ea4\") " Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.135706 4794 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5589f24e-f4c8-427e-ba13-f0ffb8358940-operator-scripts\") pod \"5589f24e-f4c8-427e-ba13-f0ffb8358940\" (UID: \"5589f24e-f4c8-427e-ba13-f0ffb8358940\") " Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.135732 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d78dz\" (UniqueName: \"kubernetes.io/projected/48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4-kube-api-access-d78dz\") pod \"48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4\" (UID: \"48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4\") " Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.135754 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-db-sync-config-data\") pod \"fb8edc26-5ad8-440e-9d5b-942b0a287ea4\" (UID: \"fb8edc26-5ad8-440e-9d5b-942b0a287ea4\") " Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.135815 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9e97513-5c89-4917-8e5a-d2230e694e3f-operator-scripts\") pod \"f9e97513-5c89-4917-8e5a-d2230e694e3f\" (UID: \"f9e97513-5c89-4917-8e5a-d2230e694e3f\") " Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.136437 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tm4p\" (UniqueName: \"kubernetes.io/projected/6505f038-47d3-4a1b-a939-11469306ff84-kube-api-access-4tm4p\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.136458 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eba0114-90ef-495f-b633-be0e999ee9db-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.136469 4794 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-7bw9v\" (UniqueName: \"kubernetes.io/projected/0eba0114-90ef-495f-b633-be0e999ee9db-kube-api-access-7bw9v\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.136479 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlhp7\" (UniqueName: \"kubernetes.io/projected/22fec0db-d521-4e76-bd89-7c22ea6a8bb1-kube-api-access-xlhp7\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.136490 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs48t\" (UniqueName: \"kubernetes.io/projected/6989884b-6a5b-4e42-a0c8-bfd3a1361057-kube-api-access-vs48t\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.137028 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9e97513-5c89-4917-8e5a-d2230e694e3f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9e97513-5c89-4917-8e5a-d2230e694e3f" (UID: "f9e97513-5c89-4917-8e5a-d2230e694e3f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.141180 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9e97513-5c89-4917-8e5a-d2230e694e3f-kube-api-access-5dw7m" (OuterVolumeSpecName: "kube-api-access-5dw7m") pod "f9e97513-5c89-4917-8e5a-d2230e694e3f" (UID: "f9e97513-5c89-4917-8e5a-d2230e694e3f"). InnerVolumeSpecName "kube-api-access-5dw7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.141486 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eb6af8b-8f65-4725-a2bc-88339a37bf85-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1eb6af8b-8f65-4725-a2bc-88339a37bf85" (UID: "1eb6af8b-8f65-4725-a2bc-88339a37bf85"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.141705 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4" (UID: "48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.141938 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5589f24e-f4c8-427e-ba13-f0ffb8358940-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5589f24e-f4c8-427e-ba13-f0ffb8358940" (UID: "5589f24e-f4c8-427e-ba13-f0ffb8358940"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.144615 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5589f24e-f4c8-427e-ba13-f0ffb8358940-kube-api-access-vls89" (OuterVolumeSpecName: "kube-api-access-vls89") pod "5589f24e-f4c8-427e-ba13-f0ffb8358940" (UID: "5589f24e-f4c8-427e-ba13-f0ffb8358940"). InnerVolumeSpecName "kube-api-access-vls89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.146555 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eb6af8b-8f65-4725-a2bc-88339a37bf85-kube-api-access-dmg8j" (OuterVolumeSpecName: "kube-api-access-dmg8j") pod "1eb6af8b-8f65-4725-a2bc-88339a37bf85" (UID: "1eb6af8b-8f65-4725-a2bc-88339a37bf85"). InnerVolumeSpecName "kube-api-access-dmg8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.149301 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-kube-api-access-kjp46" (OuterVolumeSpecName: "kube-api-access-kjp46") pod "fb8edc26-5ad8-440e-9d5b-942b0a287ea4" (UID: "fb8edc26-5ad8-440e-9d5b-942b0a287ea4"). InnerVolumeSpecName "kube-api-access-kjp46". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.154580 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4-kube-api-access-d78dz" (OuterVolumeSpecName: "kube-api-access-d78dz") pod "48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4" (UID: "48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4"). InnerVolumeSpecName "kube-api-access-d78dz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.159461 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fb8edc26-5ad8-440e-9d5b-942b0a287ea4" (UID: "fb8edc26-5ad8-440e-9d5b-942b0a287ea4"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.174749 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb8edc26-5ad8-440e-9d5b-942b0a287ea4" (UID: "fb8edc26-5ad8-440e-9d5b-942b0a287ea4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.215801 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-config-data" (OuterVolumeSpecName: "config-data") pod "fb8edc26-5ad8-440e-9d5b-942b0a287ea4" (UID: "fb8edc26-5ad8-440e-9d5b-942b0a287ea4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.238545 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5589f24e-f4c8-427e-ba13-f0ffb8358940-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.238581 4794 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.238591 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d78dz\" (UniqueName: \"kubernetes.io/projected/48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4-kube-api-access-d78dz\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.238602 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9e97513-5c89-4917-8e5a-d2230e694e3f-operator-scripts\") on 
node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.238613 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vls89\" (UniqueName: \"kubernetes.io/projected/5589f24e-f4c8-427e-ba13-f0ffb8358940-kube-api-access-vls89\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.238621 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.238630 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmg8j\" (UniqueName: \"kubernetes.io/projected/1eb6af8b-8f65-4725-a2bc-88339a37bf85-kube-api-access-dmg8j\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.238638 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1eb6af8b-8f65-4725-a2bc-88339a37bf85-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.238672 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.238681 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dw7m\" (UniqueName: \"kubernetes.io/projected/f9e97513-5c89-4917-8e5a-d2230e694e3f-kube-api-access-5dw7m\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.238689 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 
17:21:35.238697 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjp46\" (UniqueName: \"kubernetes.io/projected/fb8edc26-5ad8-440e-9d5b-942b0a287ea4-kube-api-access-kjp46\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4794]: I0216 17:21:35.997651 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.013603 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.092865 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tk65m" event={"ID":"9c38fe9c-5f26-457a-9209-688ba917fc8c","Type":"ContainerStarted","Data":"c0014598bc2a512223afdf6b71b9f3b4a272584045d78c4d818756b0f6ddd386"} Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.144959 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-tk65m" podStartSLOduration=3.792217657 podStartE2EDuration="11.144932845s" podCreationTimestamp="2026-02-16 17:21:25 +0000 UTC" firstStartedPulling="2026-02-16 17:21:27.37718482 +0000 UTC m=+1313.325279467" lastFinishedPulling="2026-02-16 17:21:34.729900018 +0000 UTC m=+1320.677994655" observedRunningTime="2026-02-16 17:21:36.129408223 +0000 UTC m=+1322.077502880" watchObservedRunningTime="2026-02-16 17:21:36.144932845 +0000 UTC m=+1322.093027492" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.154374 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-xp5cn" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.155406 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"54acc9db-6bd7-463f-8637-6aa39ed3eb11","Type":"ContainerStarted","Data":"03de3f5431bceff3af134c434ce355c12764f985a0ab829a4034f1d5187e78bd"} Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.156574 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-vc8d5" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.187776 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.237930 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=48.854784273 podStartE2EDuration="58.237910926s" podCreationTimestamp="2026-02-16 17:20:38 +0000 UTC" firstStartedPulling="2026-02-16 17:21:17.385796358 +0000 UTC m=+1303.333891005" lastFinishedPulling="2026-02-16 17:21:26.768923011 +0000 UTC m=+1312.717017658" observedRunningTime="2026-02-16 17:21:36.23611458 +0000 UTC m=+1322.184209227" watchObservedRunningTime="2026-02-16 17:21:36.237910926 +0000 UTC m=+1322.186005573" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.749068 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-pszl5"] Feb 16 17:21:36 crc kubenswrapper[4794]: E0216 17:21:36.749874 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4" containerName="mariadb-account-create-update" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.749892 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4" containerName="mariadb-account-create-update" Feb 16 17:21:36 crc kubenswrapper[4794]: E0216 17:21:36.749920 4794 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9e97513-5c89-4917-8e5a-d2230e694e3f" containerName="mariadb-database-create" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.749927 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9e97513-5c89-4917-8e5a-d2230e694e3f" containerName="mariadb-database-create" Feb 16 17:21:36 crc kubenswrapper[4794]: E0216 17:21:36.749944 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eba0114-90ef-495f-b633-be0e999ee9db" containerName="mariadb-database-create" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.749950 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eba0114-90ef-495f-b633-be0e999ee9db" containerName="mariadb-database-create" Feb 16 17:21:36 crc kubenswrapper[4794]: E0216 17:21:36.749963 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb6af8b-8f65-4725-a2bc-88339a37bf85" containerName="mariadb-database-create" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.749968 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb6af8b-8f65-4725-a2bc-88339a37bf85" containerName="mariadb-database-create" Feb 16 17:21:36 crc kubenswrapper[4794]: E0216 17:21:36.749978 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb8edc26-5ad8-440e-9d5b-942b0a287ea4" containerName="glance-db-sync" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.749984 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb8edc26-5ad8-440e-9d5b-942b0a287ea4" containerName="glance-db-sync" Feb 16 17:21:36 crc kubenswrapper[4794]: E0216 17:21:36.749995 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5589f24e-f4c8-427e-ba13-f0ffb8358940" containerName="mariadb-account-create-update" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.750001 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="5589f24e-f4c8-427e-ba13-f0ffb8358940" containerName="mariadb-account-create-update" Feb 16 
17:21:36 crc kubenswrapper[4794]: E0216 17:21:36.750011 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6505f038-47d3-4a1b-a939-11469306ff84" containerName="mariadb-database-create" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.750017 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="6505f038-47d3-4a1b-a939-11469306ff84" containerName="mariadb-database-create" Feb 16 17:21:36 crc kubenswrapper[4794]: E0216 17:21:36.750032 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6989884b-6a5b-4e42-a0c8-bfd3a1361057" containerName="mariadb-account-create-update" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.750038 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="6989884b-6a5b-4e42-a0c8-bfd3a1361057" containerName="mariadb-account-create-update" Feb 16 17:21:36 crc kubenswrapper[4794]: E0216 17:21:36.750047 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22fec0db-d521-4e76-bd89-7c22ea6a8bb1" containerName="mariadb-account-create-update" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.750052 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="22fec0db-d521-4e76-bd89-7c22ea6a8bb1" containerName="mariadb-account-create-update" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.750253 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb8edc26-5ad8-440e-9d5b-942b0a287ea4" containerName="glance-db-sync" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.750268 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9e97513-5c89-4917-8e5a-d2230e694e3f" containerName="mariadb-database-create" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.750282 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4" containerName="mariadb-account-create-update" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.750295 4794 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="22fec0db-d521-4e76-bd89-7c22ea6a8bb1" containerName="mariadb-account-create-update" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.750321 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eba0114-90ef-495f-b633-be0e999ee9db" containerName="mariadb-database-create" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.750332 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="6505f038-47d3-4a1b-a939-11469306ff84" containerName="mariadb-database-create" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.750340 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="5589f24e-f4c8-427e-ba13-f0ffb8358940" containerName="mariadb-account-create-update" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.750352 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eb6af8b-8f65-4725-a2bc-88339a37bf85" containerName="mariadb-database-create" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.750365 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="6989884b-6a5b-4e42-a0c8-bfd3a1361057" containerName="mariadb-account-create-update" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.751482 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-pszl5" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.797694 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-config\") pod \"dnsmasq-dns-74dc88fc-pszl5\" (UID: \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\") " pod="openstack/dnsmasq-dns-74dc88fc-pszl5" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.797757 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-pszl5\" (UID: \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\") " pod="openstack/dnsmasq-dns-74dc88fc-pszl5" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.797925 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hjtk\" (UniqueName: \"kubernetes.io/projected/d821d6c0-d74b-42b0-a9f1-a127addce3a0-kube-api-access-8hjtk\") pod \"dnsmasq-dns-74dc88fc-pszl5\" (UID: \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\") " pod="openstack/dnsmasq-dns-74dc88fc-pszl5" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.797953 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-dns-svc\") pod \"dnsmasq-dns-74dc88fc-pszl5\" (UID: \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\") " pod="openstack/dnsmasq-dns-74dc88fc-pszl5" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.797987 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-pszl5\" (UID: 
\"d821d6c0-d74b-42b0-a9f1-a127addce3a0\") " pod="openstack/dnsmasq-dns-74dc88fc-pszl5" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.813973 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-pszl5"] Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.896349 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-pszl5"] Feb 16 17:21:36 crc kubenswrapper[4794]: E0216 17:21:36.897341 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-8hjtk ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-74dc88fc-pszl5" podUID="d821d6c0-d74b-42b0-a9f1-a127addce3a0" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.900966 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hjtk\" (UniqueName: \"kubernetes.io/projected/d821d6c0-d74b-42b0-a9f1-a127addce3a0-kube-api-access-8hjtk\") pod \"dnsmasq-dns-74dc88fc-pszl5\" (UID: \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\") " pod="openstack/dnsmasq-dns-74dc88fc-pszl5" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.901049 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-dns-svc\") pod \"dnsmasq-dns-74dc88fc-pszl5\" (UID: \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\") " pod="openstack/dnsmasq-dns-74dc88fc-pszl5" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.901087 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-pszl5\" (UID: \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\") " pod="openstack/dnsmasq-dns-74dc88fc-pszl5" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 
17:21:36.901210 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-config\") pod \"dnsmasq-dns-74dc88fc-pszl5\" (UID: \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\") " pod="openstack/dnsmasq-dns-74dc88fc-pszl5" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.901233 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-pszl5\" (UID: \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\") " pod="openstack/dnsmasq-dns-74dc88fc-pszl5" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.902444 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-pszl5\" (UID: \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\") " pod="openstack/dnsmasq-dns-74dc88fc-pszl5" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.903451 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-dns-svc\") pod \"dnsmasq-dns-74dc88fc-pszl5\" (UID: \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\") " pod="openstack/dnsmasq-dns-74dc88fc-pszl5" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.904343 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-pszl5\" (UID: \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\") " pod="openstack/dnsmasq-dns-74dc88fc-pszl5" Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.905428 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-config\") pod \"dnsmasq-dns-74dc88fc-pszl5\" (UID: \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\") " pod="openstack/dnsmasq-dns-74dc88fc-pszl5"
Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.976275 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hjtk\" (UniqueName: \"kubernetes.io/projected/d821d6c0-d74b-42b0-a9f1-a127addce3a0-kube-api-access-8hjtk\") pod \"dnsmasq-dns-74dc88fc-pszl5\" (UID: \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\") " pod="openstack/dnsmasq-dns-74dc88fc-pszl5"
Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.982515 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-n9w9x"]
Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.984234 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x"
Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.994789 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 16 17:21:36 crc kubenswrapper[4794]: I0216 17:21:36.996861 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-n9w9x"]
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.007357 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-n9w9x\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x"
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.007426 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-config\") pod \"dnsmasq-dns-5f59b8f679-n9w9x\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x"
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.007446 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbbxc\" (UniqueName: \"kubernetes.io/projected/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-kube-api-access-gbbxc\") pod \"dnsmasq-dns-5f59b8f679-n9w9x\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x"
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.007708 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-n9w9x\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x"
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.007806 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-n9w9x\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x"
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.007831 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-n9w9x\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x"
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.109942 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-n9w9x\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x"
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.110014 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-n9w9x\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x"
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.110036 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-n9w9x\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x"
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.110153 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-n9w9x\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x"
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.110212 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-config\") pod \"dnsmasq-dns-5f59b8f679-n9w9x\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x"
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.110234 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbbxc\" (UniqueName: \"kubernetes.io/projected/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-kube-api-access-gbbxc\") pod \"dnsmasq-dns-5f59b8f679-n9w9x\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x"
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.111420 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-n9w9x\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x"
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.111658 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-n9w9x\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x"
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.111724 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-n9w9x\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x"
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.111674 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-n9w9x\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x"
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.111785 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-config\") pod \"dnsmasq-dns-5f59b8f679-n9w9x\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x"
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.127495 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbbxc\" (UniqueName: \"kubernetes.io/projected/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-kube-api-access-gbbxc\") pod \"dnsmasq-dns-5f59b8f679-n9w9x\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x"
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.167059 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-pszl5"
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.190438 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-pszl5"
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.211487 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-config\") pod \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\" (UID: \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\") "
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.211602 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hjtk\" (UniqueName: \"kubernetes.io/projected/d821d6c0-d74b-42b0-a9f1-a127addce3a0-kube-api-access-8hjtk\") pod \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\" (UID: \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\") "
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.211673 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-dns-svc\") pod \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\" (UID: \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\") "
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.211724 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-ovsdbserver-sb\") pod \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\" (UID: \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\") "
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.211744 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-ovsdbserver-nb\") pod \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\" (UID: \"d821d6c0-d74b-42b0-a9f1-a127addce3a0\") "
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.211999 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-config" (OuterVolumeSpecName: "config") pod "d821d6c0-d74b-42b0-a9f1-a127addce3a0" (UID: "d821d6c0-d74b-42b0-a9f1-a127addce3a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.212177 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d821d6c0-d74b-42b0-a9f1-a127addce3a0" (UID: "d821d6c0-d74b-42b0-a9f1-a127addce3a0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.212207 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d821d6c0-d74b-42b0-a9f1-a127addce3a0" (UID: "d821d6c0-d74b-42b0-a9f1-a127addce3a0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.212363 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d821d6c0-d74b-42b0-a9f1-a127addce3a0" (UID: "d821d6c0-d74b-42b0-a9f1-a127addce3a0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.212799 4794 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.213149 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.213468 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.213522 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d821d6c0-d74b-42b0-a9f1-a127addce3a0-config\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.216498 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d821d6c0-d74b-42b0-a9f1-a127addce3a0-kube-api-access-8hjtk" (OuterVolumeSpecName: "kube-api-access-8hjtk") pod "d821d6c0-d74b-42b0-a9f1-a127addce3a0" (UID: "d821d6c0-d74b-42b0-a9f1-a127addce3a0"). InnerVolumeSpecName "kube-api-access-8hjtk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.315361 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hjtk\" (UniqueName: \"kubernetes.io/projected/d821d6c0-d74b-42b0-a9f1-a127addce3a0-kube-api-access-8hjtk\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.337851 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x"
Feb 16 17:21:37 crc kubenswrapper[4794]: I0216 17:21:37.855661 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-n9w9x"]
Feb 16 17:21:38 crc kubenswrapper[4794]: I0216 17:21:38.182961 4794 generic.go:334] "Generic (PLEG): container finished" podID="cd6ae0e2-b666-4241-b3ab-fbfc47c39651" containerID="7880e1b3199ae3987009ebe7ad2442d8b2a17a087c64be5cc42adcea4432e821" exitCode=0
Feb 16 17:21:38 crc kubenswrapper[4794]: I0216 17:21:38.183214 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-pszl5"
Feb 16 17:21:38 crc kubenswrapper[4794]: I0216 17:21:38.183806 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x" event={"ID":"cd6ae0e2-b666-4241-b3ab-fbfc47c39651","Type":"ContainerDied","Data":"7880e1b3199ae3987009ebe7ad2442d8b2a17a087c64be5cc42adcea4432e821"}
Feb 16 17:21:38 crc kubenswrapper[4794]: I0216 17:21:38.183852 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x" event={"ID":"cd6ae0e2-b666-4241-b3ab-fbfc47c39651","Type":"ContainerStarted","Data":"4ea84a985e308b36159ce17a9d4c8502d4b1517652ee49166dce715be501ebfc"}
Feb 16 17:21:38 crc kubenswrapper[4794]: I0216 17:21:38.454968 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-pszl5"]
Feb 16 17:21:38 crc kubenswrapper[4794]: I0216 17:21:38.469402 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-pszl5"]
Feb 16 17:21:38 crc kubenswrapper[4794]: I0216 17:21:38.806694 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d821d6c0-d74b-42b0-a9f1-a127addce3a0" path="/var/lib/kubelet/pods/d821d6c0-d74b-42b0-a9f1-a127addce3a0/volumes"
Feb 16 17:21:39 crc kubenswrapper[4794]: I0216 17:21:39.194831 4794 generic.go:334] "Generic (PLEG): container finished" podID="9c38fe9c-5f26-457a-9209-688ba917fc8c" containerID="c0014598bc2a512223afdf6b71b9f3b4a272584045d78c4d818756b0f6ddd386" exitCode=0
Feb 16 17:21:39 crc kubenswrapper[4794]: I0216 17:21:39.194910 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tk65m" event={"ID":"9c38fe9c-5f26-457a-9209-688ba917fc8c","Type":"ContainerDied","Data":"c0014598bc2a512223afdf6b71b9f3b4a272584045d78c4d818756b0f6ddd386"}
Feb 16 17:21:39 crc kubenswrapper[4794]: I0216 17:21:39.197014 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x" event={"ID":"cd6ae0e2-b666-4241-b3ab-fbfc47c39651","Type":"ContainerStarted","Data":"6b14d3240dc923b72393fa5df8f4742930afa63848ffd3ad8101d4c83f6ea5e6"}
Feb 16 17:21:39 crc kubenswrapper[4794]: I0216 17:21:39.197446 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x"
Feb 16 17:21:39 crc kubenswrapper[4794]: I0216 17:21:39.231056 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x" podStartSLOduration=3.231036381 podStartE2EDuration="3.231036381s" podCreationTimestamp="2026-02-16 17:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:39.222783202 +0000 UTC m=+1325.170877879" watchObservedRunningTime="2026-02-16 17:21:39.231036381 +0000 UTC m=+1325.179131028"
Feb 16 17:21:40 crc kubenswrapper[4794]: I0216 17:21:40.725749 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tk65m"
Feb 16 17:21:40 crc kubenswrapper[4794]: I0216 17:21:40.793806 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpj5s\" (UniqueName: \"kubernetes.io/projected/9c38fe9c-5f26-457a-9209-688ba917fc8c-kube-api-access-vpj5s\") pod \"9c38fe9c-5f26-457a-9209-688ba917fc8c\" (UID: \"9c38fe9c-5f26-457a-9209-688ba917fc8c\") "
Feb 16 17:21:40 crc kubenswrapper[4794]: I0216 17:21:40.794036 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c38fe9c-5f26-457a-9209-688ba917fc8c-combined-ca-bundle\") pod \"9c38fe9c-5f26-457a-9209-688ba917fc8c\" (UID: \"9c38fe9c-5f26-457a-9209-688ba917fc8c\") "
Feb 16 17:21:40 crc kubenswrapper[4794]: I0216 17:21:40.794230 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c38fe9c-5f26-457a-9209-688ba917fc8c-config-data\") pod \"9c38fe9c-5f26-457a-9209-688ba917fc8c\" (UID: \"9c38fe9c-5f26-457a-9209-688ba917fc8c\") "
Feb 16 17:21:40 crc kubenswrapper[4794]: I0216 17:21:40.803852 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c38fe9c-5f26-457a-9209-688ba917fc8c-kube-api-access-vpj5s" (OuterVolumeSpecName: "kube-api-access-vpj5s") pod "9c38fe9c-5f26-457a-9209-688ba917fc8c" (UID: "9c38fe9c-5f26-457a-9209-688ba917fc8c"). InnerVolumeSpecName "kube-api-access-vpj5s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:21:40 crc kubenswrapper[4794]: I0216 17:21:40.836221 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c38fe9c-5f26-457a-9209-688ba917fc8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c38fe9c-5f26-457a-9209-688ba917fc8c" (UID: "9c38fe9c-5f26-457a-9209-688ba917fc8c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:21:40 crc kubenswrapper[4794]: I0216 17:21:40.855989 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c38fe9c-5f26-457a-9209-688ba917fc8c-config-data" (OuterVolumeSpecName: "config-data") pod "9c38fe9c-5f26-457a-9209-688ba917fc8c" (UID: "9c38fe9c-5f26-457a-9209-688ba917fc8c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:21:40 crc kubenswrapper[4794]: I0216 17:21:40.897882 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c38fe9c-5f26-457a-9209-688ba917fc8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:40 crc kubenswrapper[4794]: I0216 17:21:40.898091 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c38fe9c-5f26-457a-9209-688ba917fc8c-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:40 crc kubenswrapper[4794]: I0216 17:21:40.898149 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpj5s\" (UniqueName: \"kubernetes.io/projected/9c38fe9c-5f26-457a-9209-688ba917fc8c-kube-api-access-vpj5s\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.221758 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tk65m" event={"ID":"9c38fe9c-5f26-457a-9209-688ba917fc8c","Type":"ContainerDied","Data":"dcde796ac2a5c68ed899ad84b10724f9f2cb6a343632bbafe7694f723b8eb1f2"}
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.221871 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcde796ac2a5c68ed899ad84b10724f9f2cb6a343632bbafe7694f723b8eb1f2"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.221801 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tk65m"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.439156 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-n9w9x"]
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.440101 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x" podUID="cd6ae0e2-b666-4241-b3ab-fbfc47c39651" containerName="dnsmasq-dns" containerID="cri-o://6b14d3240dc923b72393fa5df8f4742930afa63848ffd3ad8101d4c83f6ea5e6" gracePeriod=10
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.466177 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-rxzd7"]
Feb 16 17:21:41 crc kubenswrapper[4794]: E0216 17:21:41.467027 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c38fe9c-5f26-457a-9209-688ba917fc8c" containerName="keystone-db-sync"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.467151 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c38fe9c-5f26-457a-9209-688ba917fc8c" containerName="keystone-db-sync"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.467558 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c38fe9c-5f26-457a-9209-688ba917fc8c" containerName="keystone-db-sync"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.469140 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.475344 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-snkpw"]
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.476723 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-snkpw"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.481923 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.482351 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.482608 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.482937 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gm9wc"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.489529 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-rxzd7"]
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.509471 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.514627 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-snkpw"]
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.594232 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-rppbn"]
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.597522 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-rppbn"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.609462 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.631346 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-s27hr"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.644556 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-rxzd7\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.644790 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-config-data\") pod \"keystone-bootstrap-snkpw\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") " pod="openstack/keystone-bootstrap-snkpw"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.644902 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-fernet-keys\") pod \"keystone-bootstrap-snkpw\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") " pod="openstack/keystone-bootstrap-snkpw"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.645175 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-rxzd7\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.645237 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnb2g\" (UniqueName: \"kubernetes.io/projected/4585a2a5-30be-4837-b502-d948b6f4cf6e-kube-api-access-lnb2g\") pod \"dnsmasq-dns-bbf5cc879-rxzd7\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.645290 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-rxzd7\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.647510 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-config\") pod \"dnsmasq-dns-bbf5cc879-rxzd7\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.647564 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdtfj\" (UniqueName: \"kubernetes.io/projected/957098f0-d0b0-425d-b74a-fe3c84889eab-kube-api-access-bdtfj\") pod \"keystone-bootstrap-snkpw\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") " pod="openstack/keystone-bootstrap-snkpw"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.647787 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-scripts\") pod \"keystone-bootstrap-snkpw\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") " pod="openstack/keystone-bootstrap-snkpw"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.647849 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-rxzd7\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.647895 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-credential-keys\") pod \"keystone-bootstrap-snkpw\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") " pod="openstack/keystone-bootstrap-snkpw"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.647951 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-combined-ca-bundle\") pod \"keystone-bootstrap-snkpw\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") " pod="openstack/keystone-bootstrap-snkpw"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.751241 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgjh2\" (UniqueName: \"kubernetes.io/projected/8f3b58ad-6afe-4194-a578-2f4fec69367c-kube-api-access-pgjh2\") pod \"heat-db-sync-rppbn\" (UID: \"8f3b58ad-6afe-4194-a578-2f4fec69367c\") " pod="openstack/heat-db-sync-rppbn"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.751368 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-rxzd7\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.751407 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnb2g\" (UniqueName: \"kubernetes.io/projected/4585a2a5-30be-4837-b502-d948b6f4cf6e-kube-api-access-lnb2g\") pod \"dnsmasq-dns-bbf5cc879-rxzd7\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.751435 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-rxzd7\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.751485 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-config\") pod \"dnsmasq-dns-bbf5cc879-rxzd7\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.751528 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdtfj\" (UniqueName: \"kubernetes.io/projected/957098f0-d0b0-425d-b74a-fe3c84889eab-kube-api-access-bdtfj\") pod \"keystone-bootstrap-snkpw\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") " pod="openstack/keystone-bootstrap-snkpw"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.751572 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3b58ad-6afe-4194-a578-2f4fec69367c-combined-ca-bundle\") pod \"heat-db-sync-rppbn\" (UID: \"8f3b58ad-6afe-4194-a578-2f4fec69367c\") " pod="openstack/heat-db-sync-rppbn"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.751620 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3b58ad-6afe-4194-a578-2f4fec69367c-config-data\") pod \"heat-db-sync-rppbn\" (UID: \"8f3b58ad-6afe-4194-a578-2f4fec69367c\") " pod="openstack/heat-db-sync-rppbn"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.751657 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-scripts\") pod \"keystone-bootstrap-snkpw\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") " pod="openstack/keystone-bootstrap-snkpw"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.751695 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-rxzd7\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.751737 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-credential-keys\") pod \"keystone-bootstrap-snkpw\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") " pod="openstack/keystone-bootstrap-snkpw"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.751771 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-combined-ca-bundle\") pod \"keystone-bootstrap-snkpw\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") " pod="openstack/keystone-bootstrap-snkpw"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.751895 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-rxzd7\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.751922 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-config-data\") pod \"keystone-bootstrap-snkpw\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") " pod="openstack/keystone-bootstrap-snkpw"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.751953 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-fernet-keys\") pod \"keystone-bootstrap-snkpw\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") " pod="openstack/keystone-bootstrap-snkpw"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.762771 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-rxzd7\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.763961 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-rxzd7\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.764484 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-config\") pod \"dnsmasq-dns-bbf5cc879-rxzd7\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.773332 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-rxzd7\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.774868 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-rxzd7\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.789080 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-config-data\") pod \"keystone-bootstrap-snkpw\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") " pod="openstack/keystone-bootstrap-snkpw"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.795895 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-scripts\") pod \"keystone-bootstrap-snkpw\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") " pod="openstack/keystone-bootstrap-snkpw"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.796251 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-credential-keys\") pod \"keystone-bootstrap-snkpw\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") " pod="openstack/keystone-bootstrap-snkpw"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.796770 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-rppbn"]
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.796839 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnb2g\" (UniqueName: \"kubernetes.io/projected/4585a2a5-30be-4837-b502-d948b6f4cf6e-kube-api-access-lnb2g\") pod \"dnsmasq-dns-bbf5cc879-rxzd7\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.803910 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.804264 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-fernet-keys\") pod \"keystone-bootstrap-snkpw\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") " pod="openstack/keystone-bootstrap-snkpw"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.806781 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdtfj\" (UniqueName: \"kubernetes.io/projected/957098f0-d0b0-425d-b74a-fe3c84889eab-kube-api-access-bdtfj\") pod \"keystone-bootstrap-snkpw\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") " pod="openstack/keystone-bootstrap-snkpw"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.811045 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-combined-ca-bundle\") pod \"keystone-bootstrap-snkpw\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") " pod="openstack/keystone-bootstrap-snkpw"
Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.816843 4794 util.go:30] "No sandbox for pod
can be found. Need to start a new one" pod="openstack/keystone-bootstrap-snkpw" Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.855017 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgjh2\" (UniqueName: \"kubernetes.io/projected/8f3b58ad-6afe-4194-a578-2f4fec69367c-kube-api-access-pgjh2\") pod \"heat-db-sync-rppbn\" (UID: \"8f3b58ad-6afe-4194-a578-2f4fec69367c\") " pod="openstack/heat-db-sync-rppbn" Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.855139 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3b58ad-6afe-4194-a578-2f4fec69367c-combined-ca-bundle\") pod \"heat-db-sync-rppbn\" (UID: \"8f3b58ad-6afe-4194-a578-2f4fec69367c\") " pod="openstack/heat-db-sync-rppbn" Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.855172 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3b58ad-6afe-4194-a578-2f4fec69367c-config-data\") pod \"heat-db-sync-rppbn\" (UID: \"8f3b58ad-6afe-4194-a578-2f4fec69367c\") " pod="openstack/heat-db-sync-rppbn" Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.858403 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-fs2n9"] Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.862366 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-fs2n9" Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.863429 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3b58ad-6afe-4194-a578-2f4fec69367c-config-data\") pod \"heat-db-sync-rppbn\" (UID: \"8f3b58ad-6afe-4194-a578-2f4fec69367c\") " pod="openstack/heat-db-sync-rppbn" Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.863626 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3b58ad-6afe-4194-a578-2f4fec69367c-combined-ca-bundle\") pod \"heat-db-sync-rppbn\" (UID: \"8f3b58ad-6afe-4194-a578-2f4fec69367c\") " pod="openstack/heat-db-sync-rppbn" Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.875034 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qxq52" Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.916745 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.916797 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.924036 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fs2n9"] Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.941021 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgjh2\" (UniqueName: \"kubernetes.io/projected/8f3b58ad-6afe-4194-a578-2f4fec69367c-kube-api-access-pgjh2\") pod \"heat-db-sync-rppbn\" (UID: \"8f3b58ad-6afe-4194-a578-2f4fec69367c\") " pod="openstack/heat-db-sync-rppbn" Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.958077 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-rs2k4"] Feb 16 17:21:41 crc 
kubenswrapper[4794]: I0216 17:21:41.960452 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gxcp\" (UniqueName: \"kubernetes.io/projected/67e15f05-9d62-45f7-a278-aeb9583be1a3-kube-api-access-9gxcp\") pod \"neutron-db-sync-fs2n9\" (UID: \"67e15f05-9d62-45f7-a278-aeb9583be1a3\") " pod="openstack/neutron-db-sync-fs2n9" Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.960735 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67e15f05-9d62-45f7-a278-aeb9583be1a3-combined-ca-bundle\") pod \"neutron-db-sync-fs2n9\" (UID: \"67e15f05-9d62-45f7-a278-aeb9583be1a3\") " pod="openstack/neutron-db-sync-fs2n9" Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.960928 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/67e15f05-9d62-45f7-a278-aeb9583be1a3-config\") pod \"neutron-db-sync-fs2n9\" (UID: \"67e15f05-9d62-45f7-a278-aeb9583be1a3\") " pod="openstack/neutron-db-sync-fs2n9" Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.965664 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rs2k4" Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.987231 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 17:21:41 crc kubenswrapper[4794]: I0216 17:21:41.987750 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tk9n7" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.040275 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-rppbn" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.040754 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-t9x9p"] Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.044863 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-t9x9p" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.056772 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-lc8tq" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.067826 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.084952 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.089409 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnm8q\" (UniqueName: \"kubernetes.io/projected/865acfbb-330f-4594-a7d8-64962cab3cd5-kube-api-access-wnm8q\") pod \"barbican-db-sync-rs2k4\" (UID: \"865acfbb-330f-4594-a7d8-64962cab3cd5\") " pod="openstack/barbican-db-sync-rs2k4" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.089583 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67e15f05-9d62-45f7-a278-aeb9583be1a3-combined-ca-bundle\") pod \"neutron-db-sync-fs2n9\" (UID: \"67e15f05-9d62-45f7-a278-aeb9583be1a3\") " pod="openstack/neutron-db-sync-fs2n9" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.089765 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/67e15f05-9d62-45f7-a278-aeb9583be1a3-config\") pod \"neutron-db-sync-fs2n9\" (UID: \"67e15f05-9d62-45f7-a278-aeb9583be1a3\") " 
pod="openstack/neutron-db-sync-fs2n9" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.089836 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gxcp\" (UniqueName: \"kubernetes.io/projected/67e15f05-9d62-45f7-a278-aeb9583be1a3-kube-api-access-9gxcp\") pod \"neutron-db-sync-fs2n9\" (UID: \"67e15f05-9d62-45f7-a278-aeb9583be1a3\") " pod="openstack/neutron-db-sync-fs2n9" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.089868 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/865acfbb-330f-4594-a7d8-64962cab3cd5-db-sync-config-data\") pod \"barbican-db-sync-rs2k4\" (UID: \"865acfbb-330f-4594-a7d8-64962cab3cd5\") " pod="openstack/barbican-db-sync-rs2k4" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.089888 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865acfbb-330f-4594-a7d8-64962cab3cd5-combined-ca-bundle\") pod \"barbican-db-sync-rs2k4\" (UID: \"865acfbb-330f-4594-a7d8-64962cab3cd5\") " pod="openstack/barbican-db-sync-rs2k4" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.129891 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67e15f05-9d62-45f7-a278-aeb9583be1a3-combined-ca-bundle\") pod \"neutron-db-sync-fs2n9\" (UID: \"67e15f05-9d62-45f7-a278-aeb9583be1a3\") " pod="openstack/neutron-db-sync-fs2n9" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.130133 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/67e15f05-9d62-45f7-a278-aeb9583be1a3-config\") pod \"neutron-db-sync-fs2n9\" (UID: \"67e15f05-9d62-45f7-a278-aeb9583be1a3\") " pod="openstack/neutron-db-sync-fs2n9" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 
17:21:42.155890 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gxcp\" (UniqueName: \"kubernetes.io/projected/67e15f05-9d62-45f7-a278-aeb9583be1a3-kube-api-access-9gxcp\") pod \"neutron-db-sync-fs2n9\" (UID: \"67e15f05-9d62-45f7-a278-aeb9583be1a3\") " pod="openstack/neutron-db-sync-fs2n9" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.157678 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rs2k4"] Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.207956 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-config-data\") pod \"cinder-db-sync-t9x9p\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") " pod="openstack/cinder-db-sync-t9x9p" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.208010 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-db-sync-config-data\") pod \"cinder-db-sync-t9x9p\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") " pod="openstack/cinder-db-sync-t9x9p" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.208137 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnm8q\" (UniqueName: \"kubernetes.io/projected/865acfbb-330f-4594-a7d8-64962cab3cd5-kube-api-access-wnm8q\") pod \"barbican-db-sync-rs2k4\" (UID: \"865acfbb-330f-4594-a7d8-64962cab3cd5\") " pod="openstack/barbican-db-sync-rs2k4" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.208324 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/706ed090-ccb8-4488-ae71-8c991476fd08-etc-machine-id\") pod \"cinder-db-sync-t9x9p\" (UID: 
\"706ed090-ccb8-4488-ae71-8c991476fd08\") " pod="openstack/cinder-db-sync-t9x9p" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.208366 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-combined-ca-bundle\") pod \"cinder-db-sync-t9x9p\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") " pod="openstack/cinder-db-sync-t9x9p" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.208413 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvqfk\" (UniqueName: \"kubernetes.io/projected/706ed090-ccb8-4488-ae71-8c991476fd08-kube-api-access-fvqfk\") pod \"cinder-db-sync-t9x9p\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") " pod="openstack/cinder-db-sync-t9x9p" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.208517 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/865acfbb-330f-4594-a7d8-64962cab3cd5-db-sync-config-data\") pod \"barbican-db-sync-rs2k4\" (UID: \"865acfbb-330f-4594-a7d8-64962cab3cd5\") " pod="openstack/barbican-db-sync-rs2k4" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.208541 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865acfbb-330f-4594-a7d8-64962cab3cd5-combined-ca-bundle\") pod \"barbican-db-sync-rs2k4\" (UID: \"865acfbb-330f-4594-a7d8-64962cab3cd5\") " pod="openstack/barbican-db-sync-rs2k4" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.208563 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-scripts\") pod \"cinder-db-sync-t9x9p\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") " 
pod="openstack/cinder-db-sync-t9x9p" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.218964 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865acfbb-330f-4594-a7d8-64962cab3cd5-combined-ca-bundle\") pod \"barbican-db-sync-rs2k4\" (UID: \"865acfbb-330f-4594-a7d8-64962cab3cd5\") " pod="openstack/barbican-db-sync-rs2k4" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.246417 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-t9x9p"] Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.278219 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/865acfbb-330f-4594-a7d8-64962cab3cd5-db-sync-config-data\") pod \"barbican-db-sync-rs2k4\" (UID: \"865acfbb-330f-4594-a7d8-64962cab3cd5\") " pod="openstack/barbican-db-sync-rs2k4" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.322214 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-config-data\") pod \"cinder-db-sync-t9x9p\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") " pod="openstack/cinder-db-sync-t9x9p" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.332050 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-db-sync-config-data\") pod \"cinder-db-sync-t9x9p\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") " pod="openstack/cinder-db-sync-t9x9p" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.332482 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/706ed090-ccb8-4488-ae71-8c991476fd08-etc-machine-id\") pod \"cinder-db-sync-t9x9p\" (UID: 
\"706ed090-ccb8-4488-ae71-8c991476fd08\") " pod="openstack/cinder-db-sync-t9x9p" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.332540 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-combined-ca-bundle\") pod \"cinder-db-sync-t9x9p\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") " pod="openstack/cinder-db-sync-t9x9p" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.332605 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvqfk\" (UniqueName: \"kubernetes.io/projected/706ed090-ccb8-4488-ae71-8c991476fd08-kube-api-access-fvqfk\") pod \"cinder-db-sync-t9x9p\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") " pod="openstack/cinder-db-sync-t9x9p" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.332723 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-scripts\") pod \"cinder-db-sync-t9x9p\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") " pod="openstack/cinder-db-sync-t9x9p" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.322769 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-4fdhf"] Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.335197 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-4fdhf" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.336348 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/706ed090-ccb8-4488-ae71-8c991476fd08-etc-machine-id\") pod \"cinder-db-sync-t9x9p\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") " pod="openstack/cinder-db-sync-t9x9p" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.344511 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-config-data\") pod \"cinder-db-sync-t9x9p\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") " pod="openstack/cinder-db-sync-t9x9p" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.350840 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9lc9q" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.351101 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.351862 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.352000 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-db-sync-config-data\") pod \"cinder-db-sync-t9x9p\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") " pod="openstack/cinder-db-sync-t9x9p" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.352528 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-combined-ca-bundle\") pod \"cinder-db-sync-t9x9p\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") " 
pod="openstack/cinder-db-sync-t9x9p" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.357868 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-scripts\") pod \"cinder-db-sync-t9x9p\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") " pod="openstack/cinder-db-sync-t9x9p" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.359343 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4fdhf"] Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.375097 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnm8q\" (UniqueName: \"kubernetes.io/projected/865acfbb-330f-4594-a7d8-64962cab3cd5-kube-api-access-wnm8q\") pod \"barbican-db-sync-rs2k4\" (UID: \"865acfbb-330f-4594-a7d8-64962cab3cd5\") " pod="openstack/barbican-db-sync-rs2k4" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.375674 4794 generic.go:334] "Generic (PLEG): container finished" podID="cd6ae0e2-b666-4241-b3ab-fbfc47c39651" containerID="6b14d3240dc923b72393fa5df8f4742930afa63848ffd3ad8101d4c83f6ea5e6" exitCode=0 Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.375718 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x" event={"ID":"cd6ae0e2-b666-4241-b3ab-fbfc47c39651","Type":"ContainerDied","Data":"6b14d3240dc923b72393fa5df8f4742930afa63848ffd3ad8101d4c83f6ea5e6"} Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.376272 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-rxzd7"] Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.376404 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-fs2n9" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.405922 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvqfk\" (UniqueName: \"kubernetes.io/projected/706ed090-ccb8-4488-ae71-8c991476fd08-kube-api-access-fvqfk\") pod \"cinder-db-sync-t9x9p\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") " pod="openstack/cinder-db-sync-t9x9p" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.409891 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-rdq4b"] Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.416752 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.427423 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-rdq4b"] Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.438010 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-scripts\") pod \"placement-db-sync-4fdhf\" (UID: \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\") " pod="openstack/placement-db-sync-4fdhf" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.438095 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-logs\") pod \"placement-db-sync-4fdhf\" (UID: \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\") " pod="openstack/placement-db-sync-4fdhf" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.438129 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-config-data\") pod 
\"placement-db-sync-4fdhf\" (UID: \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\") " pod="openstack/placement-db-sync-4fdhf" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.438151 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-combined-ca-bundle\") pod \"placement-db-sync-4fdhf\" (UID: \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\") " pod="openstack/placement-db-sync-4fdhf" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.438185 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mghxx\" (UniqueName: \"kubernetes.io/projected/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-kube-api-access-mghxx\") pod \"placement-db-sync-4fdhf\" (UID: \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\") " pod="openstack/placement-db-sync-4fdhf" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.446564 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.460965 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.468478 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-rs2k4" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.480380 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.481770 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.487082 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.509808 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-t9x9p" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.562135 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc67x\" (UniqueName: \"kubernetes.io/projected/b7b5f58b-cbab-4834-93ce-96d088299265-kube-api-access-dc67x\") pod \"dnsmasq-dns-56df8fb6b7-rdq4b\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.562193 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-scripts\") pod \"ceilometer-0\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") " pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.562222 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-rdq4b\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.564156 4794 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") " pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.564248 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-scripts\") pod \"placement-db-sync-4fdhf\" (UID: \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\") " pod="openstack/placement-db-sync-4fdhf" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.564339 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4v52\" (UniqueName: \"kubernetes.io/projected/f584ef06-0506-4130-b87a-ec406e89d1f5-kube-api-access-p4v52\") pod \"ceilometer-0\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") " pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.564361 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") " pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.564380 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-config-data\") pod \"ceilometer-0\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") " pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.564410 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-config\") pod \"dnsmasq-dns-56df8fb6b7-rdq4b\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.566477 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f584ef06-0506-4130-b87a-ec406e89d1f5-log-httpd\") pod \"ceilometer-0\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") " pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.566499 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f584ef06-0506-4130-b87a-ec406e89d1f5-run-httpd\") pod \"ceilometer-0\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") " pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.566529 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-logs\") pod \"placement-db-sync-4fdhf\" (UID: \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\") " pod="openstack/placement-db-sync-4fdhf" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.566569 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-config-data\") pod \"placement-db-sync-4fdhf\" (UID: \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\") " pod="openstack/placement-db-sync-4fdhf" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.566591 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-combined-ca-bundle\") pod \"placement-db-sync-4fdhf\" (UID: 
\"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\") " pod="openstack/placement-db-sync-4fdhf" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.566616 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-rdq4b\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.566652 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mghxx\" (UniqueName: \"kubernetes.io/projected/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-kube-api-access-mghxx\") pod \"placement-db-sync-4fdhf\" (UID: \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\") " pod="openstack/placement-db-sync-4fdhf" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.566681 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-rdq4b\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.566699 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-rdq4b\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.567034 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-logs\") pod \"placement-db-sync-4fdhf\" (UID: 
\"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\") " pod="openstack/placement-db-sync-4fdhf" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.573999 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-config-data\") pod \"placement-db-sync-4fdhf\" (UID: \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\") " pod="openstack/placement-db-sync-4fdhf" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.574828 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-scripts\") pod \"placement-db-sync-4fdhf\" (UID: \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\") " pod="openstack/placement-db-sync-4fdhf" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.581550 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.601797 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-combined-ca-bundle\") pod \"placement-db-sync-4fdhf\" (UID: \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\") " pod="openstack/placement-db-sync-4fdhf" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.603643 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mghxx\" (UniqueName: \"kubernetes.io/projected/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-kube-api-access-mghxx\") pod \"placement-db-sync-4fdhf\" (UID: \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\") " pod="openstack/placement-db-sync-4fdhf" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.677353 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:21:42 crc kubenswrapper[4794]: E0216 17:21:42.677843 4794 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd6ae0e2-b666-4241-b3ab-fbfc47c39651" containerName="init" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.677855 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd6ae0e2-b666-4241-b3ab-fbfc47c39651" containerName="init" Feb 16 17:21:42 crc kubenswrapper[4794]: E0216 17:21:42.677882 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd6ae0e2-b666-4241-b3ab-fbfc47c39651" containerName="dnsmasq-dns" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.677888 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd6ae0e2-b666-4241-b3ab-fbfc47c39651" containerName="dnsmasq-dns" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.678066 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-dns-swift-storage-0\") pod \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.678129 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd6ae0e2-b666-4241-b3ab-fbfc47c39651" containerName="dnsmasq-dns" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.678188 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-config\") pod \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.678548 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbbxc\" (UniqueName: \"kubernetes.io/projected/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-kube-api-access-gbbxc\") pod \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " Feb 16 17:21:42 crc 
kubenswrapper[4794]: I0216 17:21:42.678659 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-ovsdbserver-sb\") pod \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.678690 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-dns-svc\") pod \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.678718 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-ovsdbserver-nb\") pod \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\" (UID: \"cd6ae0e2-b666-4241-b3ab-fbfc47c39651\") " Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.678977 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") " pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.679037 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4v52\" (UniqueName: \"kubernetes.io/projected/f584ef06-0506-4130-b87a-ec406e89d1f5-kube-api-access-p4v52\") pod \"ceilometer-0\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") " pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.679061 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") " pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.679083 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-config-data\") pod \"ceilometer-0\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") " pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.679106 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-config\") pod \"dnsmasq-dns-56df8fb6b7-rdq4b\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.679134 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f584ef06-0506-4130-b87a-ec406e89d1f5-log-httpd\") pod \"ceilometer-0\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") " pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.679150 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f584ef06-0506-4130-b87a-ec406e89d1f5-run-httpd\") pod \"ceilometer-0\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") " pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.679196 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-rdq4b\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:21:42 crc 
kubenswrapper[4794]: I0216 17:21:42.679220 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.679239 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-rdq4b\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.679257 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-rdq4b\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.679293 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc67x\" (UniqueName: \"kubernetes.io/projected/b7b5f58b-cbab-4834-93ce-96d088299265-kube-api-access-dc67x\") pod \"dnsmasq-dns-56df8fb6b7-rdq4b\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.679331 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-scripts\") pod \"ceilometer-0\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") " pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.679351 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-rdq4b\" (UID: 
\"b7b5f58b-cbab-4834-93ce-96d088299265\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.683886 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.684993 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-rdq4b\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.685875 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-rdq4b\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.686422 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-rdq4b\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.686826 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.686914 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-rdq4b\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 
17:21:42.687070 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6gc5c" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.687888 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.688501 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f584ef06-0506-4130-b87a-ec406e89d1f5-log-httpd\") pod \"ceilometer-0\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") " pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.689067 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-config\") pod \"dnsmasq-dns-56df8fb6b7-rdq4b\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.691631 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-config-data\") pod \"ceilometer-0\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") " pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.704577 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-kube-api-access-gbbxc" (OuterVolumeSpecName: "kube-api-access-gbbxc") pod "cd6ae0e2-b666-4241-b3ab-fbfc47c39651" (UID: "cd6ae0e2-b666-4241-b3ab-fbfc47c39651"). InnerVolumeSpecName "kube-api-access-gbbxc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.706162 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-scripts\") pod \"ceilometer-0\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") " pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.718383 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") " pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.719936 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f584ef06-0506-4130-b87a-ec406e89d1f5-run-httpd\") pod \"ceilometer-0\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") " pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.735874 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-4fdhf" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.737657 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") " pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.765684 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc67x\" (UniqueName: \"kubernetes.io/projected/b7b5f58b-cbab-4834-93ce-96d088299265-kube-api-access-dc67x\") pod \"dnsmasq-dns-56df8fb6b7-rdq4b\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.766204 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.784171 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4v52\" (UniqueName: \"kubernetes.io/projected/f584ef06-0506-4130-b87a-ec406e89d1f5-kube-api-access-p4v52\") pod \"ceilometer-0\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") " pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.785457 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-config-data\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.785484 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-logs\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.785591 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-scripts\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.785614 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.785668 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.785696 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5bnx\" (UniqueName: \"kubernetes.io/projected/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-kube-api-access-z5bnx\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.785747 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.785774 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.785835 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbbxc\" (UniqueName: \"kubernetes.io/projected/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-kube-api-access-gbbxc\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.789483 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.907679 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.913996 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-scripts\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.914050 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.914144 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.914189 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5bnx\" (UniqueName: \"kubernetes.io/projected/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-kube-api-access-z5bnx\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.914320 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " 
pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.914359 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.914408 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-config-data\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.914433 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-logs\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.918811 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.920551 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-logs\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc 
kubenswrapper[4794]: I0216 17:21:42.923546 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cd6ae0e2-b666-4241-b3ab-fbfc47c39651" (UID: "cd6ae0e2-b666-4241-b3ab-fbfc47c39651"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.923601 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.923638 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/02eb96684cb2daee1e7757d905c4024416c5994d26b1f18fcded63c6a3978ca1/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.935604 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-config" (OuterVolumeSpecName: "config") pod "cd6ae0e2-b666-4241-b3ab-fbfc47c39651" (UID: "cd6ae0e2-b666-4241-b3ab-fbfc47c39651"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.966864 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-config-data\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.968827 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.970422 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-scripts\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.974242 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5bnx\" (UniqueName: \"kubernetes.io/projected/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-kube-api-access-z5bnx\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:42 crc kubenswrapper[4794]: I0216 17:21:42.990450 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:43 crc 
kubenswrapper[4794]: I0216 17:21:43.013914 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cd6ae0e2-b666-4241-b3ab-fbfc47c39651" (UID: "cd6ae0e2-b666-4241-b3ab-fbfc47c39651"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.022426 4794 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.022471 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.022485 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.046224 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cd6ae0e2-b666-4241-b3ab-fbfc47c39651" (UID: "cd6ae0e2-b666-4241-b3ab-fbfc47c39651"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.048435 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\") pod \"glance-default-external-api-0\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.058425 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cd6ae0e2-b666-4241-b3ab-fbfc47c39651" (UID: "cd6ae0e2-b666-4241-b3ab-fbfc47c39651"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.124671 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.124708 4794 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd6ae0e2-b666-4241-b3ab-fbfc47c39651-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.176042 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.182054 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.182237 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.195338 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.197171 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.197422 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.228524 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b032f9fb-6222-45f9-a022-cf7ff5b697f5-logs\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.228587 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.228615 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.228685 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/b032f9fb-6222-45f9-a022-cf7ff5b697f5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.228754 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.228775 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.228926 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z8w4\" (UniqueName: \"kubernetes.io/projected/b032f9fb-6222-45f9-a022-cf7ff5b697f5-kube-api-access-5z8w4\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.228978 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.330430 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b032f9fb-6222-45f9-a022-cf7ff5b697f5-logs\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.330500 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.330521 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.330580 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b032f9fb-6222-45f9-a022-cf7ff5b697f5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.330646 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.330662 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.330747 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z8w4\" (UniqueName: \"kubernetes.io/projected/b032f9fb-6222-45f9-a022-cf7ff5b697f5-kube-api-access-5z8w4\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.330769 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.332748 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b032f9fb-6222-45f9-a022-cf7ff5b697f5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.334545 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b032f9fb-6222-45f9-a022-cf7ff5b697f5-logs\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.340859 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.342294 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.347344 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.350059 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.350106 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cb6046d42ed9a0eea3afd967978370f0d4a85f1a0cd82d5e783a4e6c6e087e5f/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.355095 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.371026 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z8w4\" (UniqueName: \"kubernetes.io/projected/b032f9fb-6222-45f9-a022-cf7ff5b697f5-kube-api-access-5z8w4\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.460969 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x" event={"ID":"cd6ae0e2-b666-4241-b3ab-fbfc47c39651","Type":"ContainerDied","Data":"4ea84a985e308b36159ce17a9d4c8502d4b1517652ee49166dce715be501ebfc"} Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.462621 4794 scope.go:117] "RemoveContainer" containerID="6b14d3240dc923b72393fa5df8f4742930afa63848ffd3ad8101d4c83f6ea5e6" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.461061 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-n9w9x" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.476520 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\") pod \"glance-default-internal-api-0\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.510559 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.519588 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-n9w9x"] Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.527297 4794 scope.go:117] "RemoveContainer" containerID="7880e1b3199ae3987009ebe7ad2442d8b2a17a087c64be5cc42adcea4432e821" Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.546542 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-n9w9x"] Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.654454 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-rxzd7"] Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.678731 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-snkpw"] Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.691873 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-rppbn"] Feb 16 17:21:43 crc kubenswrapper[4794]: W0216 17:21:43.736936 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f3b58ad_6afe_4194_a578_2f4fec69367c.slice/crio-6872e37b7f84fd6443ab9ee4de35100ff379655d12d03f09f2131cd440b0be60 WatchSource:0}: Error 
finding container 6872e37b7f84fd6443ab9ee4de35100ff379655d12d03f09f2131cd440b0be60: Status 404 returned error can't find the container with id 6872e37b7f84fd6443ab9ee4de35100ff379655d12d03f09f2131cd440b0be60 Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.831500 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-t9x9p"] Feb 16 17:21:43 crc kubenswrapper[4794]: I0216 17:21:43.993454 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fs2n9"] Feb 16 17:21:44 crc kubenswrapper[4794]: W0216 17:21:44.013452 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67e15f05_9d62_45f7_a278_aeb9583be1a3.slice/crio-6760cd516640d34fd2ed80148d91fc3cd7143898a65aa80120940bf912ffa854 WatchSource:0}: Error finding container 6760cd516640d34fd2ed80148d91fc3cd7143898a65aa80120940bf912ffa854: Status 404 returned error can't find the container with id 6760cd516640d34fd2ed80148d91fc3cd7143898a65aa80120940bf912ffa854 Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.295228 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.322687 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rs2k4"] Feb 16 17:21:44 crc kubenswrapper[4794]: W0216 17:21:44.336645 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7473b04b_0d0a_4c73_ac81_f0ad2959dc79.slice/crio-b55e385fe345aa7f5e4ba35a458faa3e72711f8e04097aa0674a24d2d8131db6 WatchSource:0}: Error finding container b55e385fe345aa7f5e4ba35a458faa3e72711f8e04097aa0674a24d2d8131db6: Status 404 returned error can't find the container with id b55e385fe345aa7f5e4ba35a458faa3e72711f8e04097aa0674a24d2d8131db6 Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.367242 4794 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4fdhf"] Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.410272 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-rdq4b"] Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.428385 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.446061 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.549543 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" event={"ID":"b7b5f58b-cbab-4834-93ce-96d088299265","Type":"ContainerStarted","Data":"f2c6a4d2b39428c3d6a9f448be232adbfc2116183e8e8de475f2ee49852f23b9"} Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.573966 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rs2k4" event={"ID":"865acfbb-330f-4594-a7d8-64962cab3cd5","Type":"ContainerStarted","Data":"a201a8c9b7838d26172a1353efa266d9ce445b8e231a61ae7cb8e9eff3205964"} Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.577921 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fs2n9" event={"ID":"67e15f05-9d62-45f7-a278-aeb9583be1a3","Type":"ContainerStarted","Data":"4aaae524dab826255e6a2ba268bb0f7d36c73d90aa6fb43b268b42cf915e4a6d"} Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.577973 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fs2n9" event={"ID":"67e15f05-9d62-45f7-a278-aeb9583be1a3","Type":"ContainerStarted","Data":"6760cd516640d34fd2ed80148d91fc3cd7143898a65aa80120940bf912ffa854"} Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.591273 4794 generic.go:334] "Generic (PLEG): container finished" podID="4585a2a5-30be-4837-b502-d948b6f4cf6e" 
containerID="bca9dda92d3195590de93dba6f13e371808a40a9473b41166d3c4f06f8a1ea60" exitCode=0 Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.591464 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7" event={"ID":"4585a2a5-30be-4837-b502-d948b6f4cf6e","Type":"ContainerDied","Data":"bca9dda92d3195590de93dba6f13e371808a40a9473b41166d3c4f06f8a1ea60"} Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.591533 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7" event={"ID":"4585a2a5-30be-4837-b502-d948b6f4cf6e","Type":"ContainerStarted","Data":"171c4d686b07c4f93909aad5fd8eb3cb7317f500f99ec6e40b492eb98d7fb26a"} Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.594159 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rppbn" event={"ID":"8f3b58ad-6afe-4194-a578-2f4fec69367c","Type":"ContainerStarted","Data":"6872e37b7f84fd6443ab9ee4de35100ff379655d12d03f09f2131cd440b0be60"} Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.599150 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4fdhf" event={"ID":"7473b04b-0d0a-4c73-ac81-f0ad2959dc79","Type":"ContainerStarted","Data":"b55e385fe345aa7f5e4ba35a458faa3e72711f8e04097aa0674a24d2d8131db6"} Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.606561 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f584ef06-0506-4130-b87a-ec406e89d1f5","Type":"ContainerStarted","Data":"10eea464aeaf0310266524ae99b31a2de038fa9342d1f8fd78b3906d75a37ecd"} Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.619193 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-snkpw" event={"ID":"957098f0-d0b0-425d-b74a-fe3c84889eab","Type":"ContainerStarted","Data":"c16525667c36dad66eba954d729d6b86a5266e61911552421e34290fff35174d"} Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 
17:21:44.619236 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-snkpw" event={"ID":"957098f0-d0b0-425d-b74a-fe3c84889eab","Type":"ContainerStarted","Data":"48c9789245a1615bed212ea31ea7e332480d110b604cfdc9b4a7ce8083bf90a9"} Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.624490 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-t9x9p" event={"ID":"706ed090-ccb8-4488-ae71-8c991476fd08","Type":"ContainerStarted","Data":"f7aed07a34d47035c6f2721756b53444d5dec7ca9ff6cd0d3708f67f76e1193a"} Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.635269 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-fs2n9" podStartSLOduration=3.635251375 podStartE2EDuration="3.635251375s" podCreationTimestamp="2026-02-16 17:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:44.618345397 +0000 UTC m=+1330.566440044" watchObservedRunningTime="2026-02-16 17:21:44.635251375 +0000 UTC m=+1330.583346022" Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.759704 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.849247 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-snkpw" podStartSLOduration=3.849219315 podStartE2EDuration="3.849219315s" podCreationTimestamp="2026-02-16 17:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:44.674153378 +0000 UTC m=+1330.622248035" watchObservedRunningTime="2026-02-16 17:21:44.849219315 +0000 UTC m=+1330.797313972" Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.878072 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="cd6ae0e2-b666-4241-b3ab-fbfc47c39651" path="/var/lib/kubelet/pods/cd6ae0e2-b666-4241-b3ab-fbfc47c39651/volumes" Feb 16 17:21:44 crc kubenswrapper[4794]: I0216 17:21:44.901574 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.018048 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 17:21:45 crc kubenswrapper[4794]: W0216 17:21:45.065828 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb032f9fb_6222_45f9_a022_cf7ff5b697f5.slice/crio-28f1eaa1ea5b9d872be7fd52294cd36c83842baaa633ce70569f9b06b36a7bf7 WatchSource:0}: Error finding container 28f1eaa1ea5b9d872be7fd52294cd36c83842baaa633ce70569f9b06b36a7bf7: Status 404 returned error can't find the container with id 28f1eaa1ea5b9d872be7fd52294cd36c83842baaa633ce70569f9b06b36a7bf7 Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.547864 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7" Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.646632 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2a04af4e-b3be-4b1c-938c-f78a1ead2eba","Type":"ContainerStarted","Data":"5dd349dad127e31e89e62475ca46cbf47f9cb9391d8c8fab481f6bb3d025b1b3"} Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.659602 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" event={"ID":"b7b5f58b-cbab-4834-93ce-96d088299265","Type":"ContainerStarted","Data":"a30a56f991894269edeb947b028c97f9f8858cb175549ca976e6c99044addb7d"} Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.679026 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b032f9fb-6222-45f9-a022-cf7ff5b697f5","Type":"ContainerStarted","Data":"28f1eaa1ea5b9d872be7fd52294cd36c83842baaa633ce70569f9b06b36a7bf7"} Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.728703 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-ovsdbserver-nb\") pod \"4585a2a5-30be-4837-b502-d948b6f4cf6e\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.728972 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnb2g\" (UniqueName: \"kubernetes.io/projected/4585a2a5-30be-4837-b502-d948b6f4cf6e-kube-api-access-lnb2g\") pod \"4585a2a5-30be-4837-b502-d948b6f4cf6e\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.729032 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-dns-swift-storage-0\") pod \"4585a2a5-30be-4837-b502-d948b6f4cf6e\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.729077 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-ovsdbserver-sb\") pod \"4585a2a5-30be-4837-b502-d948b6f4cf6e\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.729145 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-dns-svc\") pod \"4585a2a5-30be-4837-b502-d948b6f4cf6e\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.729233 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-config\") pod \"4585a2a5-30be-4837-b502-d948b6f4cf6e\" (UID: \"4585a2a5-30be-4837-b502-d948b6f4cf6e\") " Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.740590 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4585a2a5-30be-4837-b502-d948b6f4cf6e-kube-api-access-lnb2g" (OuterVolumeSpecName: "kube-api-access-lnb2g") pod "4585a2a5-30be-4837-b502-d948b6f4cf6e" (UID: "4585a2a5-30be-4837-b502-d948b6f4cf6e"). InnerVolumeSpecName "kube-api-access-lnb2g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.765114 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4585a2a5-30be-4837-b502-d948b6f4cf6e" (UID: "4585a2a5-30be-4837-b502-d948b6f4cf6e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.770125 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4585a2a5-30be-4837-b502-d948b6f4cf6e" (UID: "4585a2a5-30be-4837-b502-d948b6f4cf6e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.780991 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7" Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.782098 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-rxzd7" event={"ID":"4585a2a5-30be-4837-b502-d948b6f4cf6e","Type":"ContainerDied","Data":"171c4d686b07c4f93909aad5fd8eb3cb7317f500f99ec6e40b492eb98d7fb26a"} Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.782565 4794 scope.go:117] "RemoveContainer" containerID="bca9dda92d3195590de93dba6f13e371808a40a9473b41166d3c4f06f8a1ea60" Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.808735 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4585a2a5-30be-4837-b502-d948b6f4cf6e" (UID: "4585a2a5-30be-4837-b502-d948b6f4cf6e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.823817 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4585a2a5-30be-4837-b502-d948b6f4cf6e" (UID: "4585a2a5-30be-4837-b502-d948b6f4cf6e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.836121 4794 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.836158 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.836168 4794 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.836176 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.836184 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnb2g\" (UniqueName: \"kubernetes.io/projected/4585a2a5-30be-4837-b502-d948b6f4cf6e-kube-api-access-lnb2g\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.854629 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-config" (OuterVolumeSpecName: "config") pod "4585a2a5-30be-4837-b502-d948b6f4cf6e" (UID: "4585a2a5-30be-4837-b502-d948b6f4cf6e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:45 crc kubenswrapper[4794]: I0216 17:21:45.939366 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4585a2a5-30be-4837-b502-d948b6f4cf6e-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:46 crc kubenswrapper[4794]: I0216 17:21:46.170930 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-rxzd7"] Feb 16 17:21:46 crc kubenswrapper[4794]: I0216 17:21:46.198447 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-rxzd7"] Feb 16 17:21:46 crc kubenswrapper[4794]: I0216 17:21:46.821890 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4585a2a5-30be-4837-b502-d948b6f4cf6e" path="/var/lib/kubelet/pods/4585a2a5-30be-4837-b502-d948b6f4cf6e/volumes" Feb 16 17:21:46 crc kubenswrapper[4794]: I0216 17:21:46.843193 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2a04af4e-b3be-4b1c-938c-f78a1ead2eba","Type":"ContainerStarted","Data":"7a8351be092aba96088e5aa491898dc2b25cc8446898d1008791eadabd0ab52c"} Feb 16 17:21:46 crc kubenswrapper[4794]: I0216 17:21:46.869157 4794 generic.go:334] "Generic (PLEG): container finished" podID="b7b5f58b-cbab-4834-93ce-96d088299265" containerID="a30a56f991894269edeb947b028c97f9f8858cb175549ca976e6c99044addb7d" exitCode=0 Feb 16 17:21:46 crc kubenswrapper[4794]: I0216 17:21:46.875273 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" event={"ID":"b7b5f58b-cbab-4834-93ce-96d088299265","Type":"ContainerDied","Data":"a30a56f991894269edeb947b028c97f9f8858cb175549ca976e6c99044addb7d"} 
Feb 16 17:21:46 crc kubenswrapper[4794]: I0216 17:21:46.875516 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" event={"ID":"b7b5f58b-cbab-4834-93ce-96d088299265","Type":"ContainerStarted","Data":"a22d8a2d0cbc3a4e8c7541c2039e59c07416d0e3d5f570bf3913a332455860a9"}
Feb 16 17:21:46 crc kubenswrapper[4794]: I0216 17:21:46.881107 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b032f9fb-6222-45f9-a022-cf7ff5b697f5","Type":"ContainerStarted","Data":"f8a174e154933e4369ba53e3ce2f7424863eca96a616906616ff5d8f4a0a9f6f"}
Feb 16 17:21:46 crc kubenswrapper[4794]: I0216 17:21:46.912946 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" podStartSLOduration=4.912926162 podStartE2EDuration="4.912926162s" podCreationTimestamp="2026-02-16 17:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:46.902260782 +0000 UTC m=+1332.850355429" watchObservedRunningTime="2026-02-16 17:21:46.912926162 +0000 UTC m=+1332.861020809"
Feb 16 17:21:47 crc kubenswrapper[4794]: I0216 17:21:47.766922 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b"
Feb 16 17:21:47 crc kubenswrapper[4794]: I0216 17:21:47.899785 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2a04af4e-b3be-4b1c-938c-f78a1ead2eba","Type":"ContainerStarted","Data":"2e70cdea89f15323990b59c352d72a60dddb5e38a27004b3d85849bf805fa539"}
Feb 16 17:21:47 crc kubenswrapper[4794]: I0216 17:21:47.899826 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2a04af4e-b3be-4b1c-938c-f78a1ead2eba" containerName="glance-log" containerID="cri-o://7a8351be092aba96088e5aa491898dc2b25cc8446898d1008791eadabd0ab52c" gracePeriod=30
Feb 16 17:21:47 crc kubenswrapper[4794]: I0216 17:21:47.900024 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2a04af4e-b3be-4b1c-938c-f78a1ead2eba" containerName="glance-httpd" containerID="cri-o://2e70cdea89f15323990b59c352d72a60dddb5e38a27004b3d85849bf805fa539" gracePeriod=30
Feb 16 17:21:47 crc kubenswrapper[4794]: I0216 17:21:47.905162 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b032f9fb-6222-45f9-a022-cf7ff5b697f5","Type":"ContainerStarted","Data":"c55fea7ed8bafbd11e40907f7fe58ed40e6772d1bec9691bc9f74b35c0d7fe5f"}
Feb 16 17:21:47 crc kubenswrapper[4794]: I0216 17:21:47.905322 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b032f9fb-6222-45f9-a022-cf7ff5b697f5" containerName="glance-log" containerID="cri-o://f8a174e154933e4369ba53e3ce2f7424863eca96a616906616ff5d8f4a0a9f6f" gracePeriod=30
Feb 16 17:21:47 crc kubenswrapper[4794]: I0216 17:21:47.905498 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b032f9fb-6222-45f9-a022-cf7ff5b697f5" containerName="glance-httpd" containerID="cri-o://c55fea7ed8bafbd11e40907f7fe58ed40e6772d1bec9691bc9f74b35c0d7fe5f" gracePeriod=30
Feb 16 17:21:47 crc kubenswrapper[4794]: I0216 17:21:47.940982 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.940956643 podStartE2EDuration="6.940956643s" podCreationTimestamp="2026-02-16 17:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:47.925893612 +0000 UTC m=+1333.873988269" watchObservedRunningTime="2026-02-16 17:21:47.940956643 +0000 UTC m=+1333.889051290"
Feb 16 17:21:47 crc kubenswrapper[4794]: I0216 17:21:47.959031 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.959006769 podStartE2EDuration="6.959006769s" podCreationTimestamp="2026-02-16 17:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:47.953150451 +0000 UTC m=+1333.901245098" watchObservedRunningTime="2026-02-16 17:21:47.959006769 +0000 UTC m=+1333.907101426"
Feb 16 17:21:48 crc kubenswrapper[4794]: E0216 17:21:48.007762 4794 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb032f9fb_6222_45f9_a022_cf7ff5b697f5.slice/crio-c55fea7ed8bafbd11e40907f7fe58ed40e6772d1bec9691bc9f74b35c0d7fe5f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb032f9fb_6222_45f9_a022_cf7ff5b697f5.slice/crio-conmon-f8a174e154933e4369ba53e3ce2f7424863eca96a616906616ff5d8f4a0a9f6f.scope\": RecentStats: unable to find data in memory cache]"
Feb 16 17:21:48 crc kubenswrapper[4794]: I0216 17:21:48.924920 4794 generic.go:334] "Generic (PLEG): container finished" podID="2a04af4e-b3be-4b1c-938c-f78a1ead2eba" containerID="2e70cdea89f15323990b59c352d72a60dddb5e38a27004b3d85849bf805fa539" exitCode=143
Feb 16 17:21:48 crc kubenswrapper[4794]: I0216 17:21:48.926209 4794 generic.go:334] "Generic (PLEG): container finished" podID="2a04af4e-b3be-4b1c-938c-f78a1ead2eba" containerID="7a8351be092aba96088e5aa491898dc2b25cc8446898d1008791eadabd0ab52c" exitCode=143
Feb 16 17:21:48 crc kubenswrapper[4794]: I0216 17:21:48.925589 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2a04af4e-b3be-4b1c-938c-f78a1ead2eba","Type":"ContainerDied","Data":"2e70cdea89f15323990b59c352d72a60dddb5e38a27004b3d85849bf805fa539"}
Feb 16 17:21:48 crc kubenswrapper[4794]: I0216 17:21:48.926367 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2a04af4e-b3be-4b1c-938c-f78a1ead2eba","Type":"ContainerDied","Data":"7a8351be092aba96088e5aa491898dc2b25cc8446898d1008791eadabd0ab52c"}
Feb 16 17:21:48 crc kubenswrapper[4794]: I0216 17:21:48.930165 4794 generic.go:334] "Generic (PLEG): container finished" podID="b032f9fb-6222-45f9-a022-cf7ff5b697f5" containerID="c55fea7ed8bafbd11e40907f7fe58ed40e6772d1bec9691bc9f74b35c0d7fe5f" exitCode=143
Feb 16 17:21:48 crc kubenswrapper[4794]: I0216 17:21:48.930201 4794 generic.go:334] "Generic (PLEG): container finished" podID="b032f9fb-6222-45f9-a022-cf7ff5b697f5" containerID="f8a174e154933e4369ba53e3ce2f7424863eca96a616906616ff5d8f4a0a9f6f" exitCode=143
Feb 16 17:21:48 crc kubenswrapper[4794]: I0216 17:21:48.930349 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b032f9fb-6222-45f9-a022-cf7ff5b697f5","Type":"ContainerDied","Data":"c55fea7ed8bafbd11e40907f7fe58ed40e6772d1bec9691bc9f74b35c0d7fe5f"}
Feb 16 17:21:48 crc kubenswrapper[4794]: I0216 17:21:48.930375 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b032f9fb-6222-45f9-a022-cf7ff5b697f5","Type":"ContainerDied","Data":"f8a174e154933e4369ba53e3ce2f7424863eca96a616906616ff5d8f4a0a9f6f"}
Feb 16 17:21:50 crc kubenswrapper[4794]: I0216 17:21:50.967292 4794 generic.go:334] "Generic (PLEG): container finished" podID="957098f0-d0b0-425d-b74a-fe3c84889eab" containerID="c16525667c36dad66eba954d729d6b86a5266e61911552421e34290fff35174d" exitCode=0
Feb 16 17:21:50 crc kubenswrapper[4794]: I0216 17:21:50.967356 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-snkpw" event={"ID":"957098f0-d0b0-425d-b74a-fe3c84889eab","Type":"ContainerDied","Data":"c16525667c36dad66eba954d729d6b86a5266e61911552421e34290fff35174d"}
Feb 16 17:21:52 crc kubenswrapper[4794]: I0216 17:21:52.767480 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b"
Feb 16 17:21:52 crc kubenswrapper[4794]: I0216 17:21:52.839206 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lswtm"]
Feb 16 17:21:52 crc kubenswrapper[4794]: I0216 17:21:52.839519 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" podUID="2f564c83-65cd-4eb0-81b3-155b5a221041" containerName="dnsmasq-dns" containerID="cri-o://a9197f571a6a4ec904f6ebf4455d0bbf732cd435435fcc0805cffabdeb5ad6df" gracePeriod=10
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.003769 4794 generic.go:334] "Generic (PLEG): container finished" podID="2f564c83-65cd-4eb0-81b3-155b5a221041" containerID="a9197f571a6a4ec904f6ebf4455d0bbf732cd435435fcc0805cffabdeb5ad6df" exitCode=0
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.003964 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" event={"ID":"2f564c83-65cd-4eb0-81b3-155b5a221041","Type":"ContainerDied","Data":"a9197f571a6a4ec904f6ebf4455d0bbf732cd435435fcc0805cffabdeb5ad6df"}
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.467248 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.470241 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" podUID="2f564c83-65cd-4eb0-81b3-155b5a221041" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: connect: connection refused"
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.566000 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b032f9fb-6222-45f9-a022-cf7ff5b697f5-logs\") pod \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") "
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.566136 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-config-data\") pod \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") "
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.566185 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-combined-ca-bundle\") pod \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") "
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.566228 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-internal-tls-certs\") pod \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") "
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.566275 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z8w4\" (UniqueName: \"kubernetes.io/projected/b032f9fb-6222-45f9-a022-cf7ff5b697f5-kube-api-access-5z8w4\") pod \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") "
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.566524 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b032f9fb-6222-45f9-a022-cf7ff5b697f5-logs" (OuterVolumeSpecName: "logs") pod "b032f9fb-6222-45f9-a022-cf7ff5b697f5" (UID: "b032f9fb-6222-45f9-a022-cf7ff5b697f5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.566543 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\") pod \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") "
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.566608 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-scripts\") pod \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") "
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.566653 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b032f9fb-6222-45f9-a022-cf7ff5b697f5-httpd-run\") pod \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\" (UID: \"b032f9fb-6222-45f9-a022-cf7ff5b697f5\") "
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.567607 4794 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b032f9fb-6222-45f9-a022-cf7ff5b697f5-logs\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.568312 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b032f9fb-6222-45f9-a022-cf7ff5b697f5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b032f9fb-6222-45f9-a022-cf7ff5b697f5" (UID: "b032f9fb-6222-45f9-a022-cf7ff5b697f5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.573156 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-scripts" (OuterVolumeSpecName: "scripts") pod "b032f9fb-6222-45f9-a022-cf7ff5b697f5" (UID: "b032f9fb-6222-45f9-a022-cf7ff5b697f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.589795 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b032f9fb-6222-45f9-a022-cf7ff5b697f5-kube-api-access-5z8w4" (OuterVolumeSpecName: "kube-api-access-5z8w4") pod "b032f9fb-6222-45f9-a022-cf7ff5b697f5" (UID: "b032f9fb-6222-45f9-a022-cf7ff5b697f5"). InnerVolumeSpecName "kube-api-access-5z8w4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.597540 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd" (OuterVolumeSpecName: "glance") pod "b032f9fb-6222-45f9-a022-cf7ff5b697f5" (UID: "b032f9fb-6222-45f9-a022-cf7ff5b697f5"). InnerVolumeSpecName "pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.614216 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b032f9fb-6222-45f9-a022-cf7ff5b697f5" (UID: "b032f9fb-6222-45f9-a022-cf7ff5b697f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.637716 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b032f9fb-6222-45f9-a022-cf7ff5b697f5" (UID: "b032f9fb-6222-45f9-a022-cf7ff5b697f5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.649183 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-config-data" (OuterVolumeSpecName: "config-data") pod "b032f9fb-6222-45f9-a022-cf7ff5b697f5" (UID: "b032f9fb-6222-45f9-a022-cf7ff5b697f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.669053 4794 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.669081 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z8w4\" (UniqueName: \"kubernetes.io/projected/b032f9fb-6222-45f9-a022-cf7ff5b697f5-kube-api-access-5z8w4\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.669118 4794 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\") on node \"crc\" "
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.669129 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.669138 4794 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b032f9fb-6222-45f9-a022-cf7ff5b697f5-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.669146 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.669154 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b032f9fb-6222-45f9-a022-cf7ff5b697f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.699849 4794 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.700086 4794 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd") on node "crc"
Feb 16 17:21:53 crc kubenswrapper[4794]: I0216 17:21:53.770993 4794 reconciler_common.go:293] "Volume detached for volume \"pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.019011 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b032f9fb-6222-45f9-a022-cf7ff5b697f5","Type":"ContainerDied","Data":"28f1eaa1ea5b9d872be7fd52294cd36c83842baaa633ce70569f9b06b36a7bf7"}
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.019640 4794 scope.go:117] "RemoveContainer" containerID="c55fea7ed8bafbd11e40907f7fe58ed40e6772d1bec9691bc9f74b35c0d7fe5f"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.019083 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.067178 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.084721 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.097519 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 17:21:54 crc kubenswrapper[4794]: E0216 17:21:54.098013 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4585a2a5-30be-4837-b502-d948b6f4cf6e" containerName="init"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.098038 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="4585a2a5-30be-4837-b502-d948b6f4cf6e" containerName="init"
Feb 16 17:21:54 crc kubenswrapper[4794]: E0216 17:21:54.098052 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b032f9fb-6222-45f9-a022-cf7ff5b697f5" containerName="glance-log"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.098074 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="b032f9fb-6222-45f9-a022-cf7ff5b697f5" containerName="glance-log"
Feb 16 17:21:54 crc kubenswrapper[4794]: E0216 17:21:54.098105 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b032f9fb-6222-45f9-a022-cf7ff5b697f5" containerName="glance-httpd"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.098113 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="b032f9fb-6222-45f9-a022-cf7ff5b697f5" containerName="glance-httpd"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.098389 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="b032f9fb-6222-45f9-a022-cf7ff5b697f5" containerName="glance-log"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.098423 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="b032f9fb-6222-45f9-a022-cf7ff5b697f5" containerName="glance-httpd"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.098458 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="4585a2a5-30be-4837-b502-d948b6f4cf6e" containerName="init"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.099791 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.103128 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.103334 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.104286 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.280009 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.280059 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d19b9ba9-ea39-41de-a397-3c5e844f24d8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.280113 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.280139 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.280255 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d19b9ba9-ea39-41de-a397-3c5e844f24d8-logs\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.280283 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.280342 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.280372 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98qzb\" (UniqueName: \"kubernetes.io/projected/d19b9ba9-ea39-41de-a397-3c5e844f24d8-kube-api-access-98qzb\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.381839 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d19b9ba9-ea39-41de-a397-3c5e844f24d8-logs\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.381884 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.381928 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.381958 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98qzb\" (UniqueName: \"kubernetes.io/projected/d19b9ba9-ea39-41de-a397-3c5e844f24d8-kube-api-access-98qzb\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.382007 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.382027 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d19b9ba9-ea39-41de-a397-3c5e844f24d8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.382054 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.382070 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.382319 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d19b9ba9-ea39-41de-a397-3c5e844f24d8-logs\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.388648 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.388817 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cb6046d42ed9a0eea3afd967978370f0d4a85f1a0cd82d5e783a4e6c6e087e5f/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.390131 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d19b9ba9-ea39-41de-a397-3c5e844f24d8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.390342 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.393952 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.394264 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.406895 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.416026 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98qzb\" (UniqueName: \"kubernetes.io/projected/d19b9ba9-ea39-41de-a397-3c5e844f24d8-kube-api-access-98qzb\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.465015 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\") pod \"glance-default-internal-api-0\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.508365 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.694113 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-scripts\") pod \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") "
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.694220 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-public-tls-certs\") pod \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") "
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.694285 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-config-data\") pod \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") "
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.694348 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-combined-ca-bundle\") pod \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") "
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.694458 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\") pod \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") "
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.694518 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5bnx\" (UniqueName: \"kubernetes.io/projected/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-kube-api-access-z5bnx\") pod \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") "
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.694612 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-logs\") pod \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") "
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.694700 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-httpd-run\") pod \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") "
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.695136 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-logs" (OuterVolumeSpecName: "logs") pod "2a04af4e-b3be-4b1c-938c-f78a1ead2eba" (UID: "2a04af4e-b3be-4b1c-938c-f78a1ead2eba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.695431 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2a04af4e-b3be-4b1c-938c-f78a1ead2eba" (UID: "2a04af4e-b3be-4b1c-938c-f78a1ead2eba"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.695602 4794 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-logs\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.695619 4794 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.699530 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-scripts" (OuterVolumeSpecName: "scripts") pod "2a04af4e-b3be-4b1c-938c-f78a1ead2eba" (UID: "2a04af4e-b3be-4b1c-938c-f78a1ead2eba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.719679 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-kube-api-access-z5bnx" (OuterVolumeSpecName: "kube-api-access-z5bnx") pod "2a04af4e-b3be-4b1c-938c-f78a1ead2eba" (UID: "2a04af4e-b3be-4b1c-938c-f78a1ead2eba"). InnerVolumeSpecName "kube-api-access-z5bnx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.737294 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a04af4e-b3be-4b1c-938c-f78a1ead2eba" (UID: "2a04af4e-b3be-4b1c-938c-f78a1ead2eba"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:54 crc kubenswrapper[4794]: E0216 17:21:54.737559 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b podName:2a04af4e-b3be-4b1c-938c-f78a1ead2eba nodeName:}" failed. No retries permitted until 2026-02-16 17:21:55.237536482 +0000 UTC m=+1341.185631129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "glance" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b") pod "2a04af4e-b3be-4b1c-938c-f78a1ead2eba" (UID: "2a04af4e-b3be-4b1c-938c-f78a1ead2eba") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.759735 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.792488 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2a04af4e-b3be-4b1c-938c-f78a1ead2eba" (UID: "2a04af4e-b3be-4b1c-938c-f78a1ead2eba"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.795495 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-config-data" (OuterVolumeSpecName: "config-data") pod "2a04af4e-b3be-4b1c-938c-f78a1ead2eba" (UID: "2a04af4e-b3be-4b1c-938c-f78a1ead2eba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.799471 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.799507 4794 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.799521 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.799533 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.799545 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5bnx\" (UniqueName: \"kubernetes.io/projected/2a04af4e-b3be-4b1c-938c-f78a1ead2eba-kube-api-access-z5bnx\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:54 crc kubenswrapper[4794]: I0216 17:21:54.815611 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b032f9fb-6222-45f9-a022-cf7ff5b697f5" path="/var/lib/kubelet/pods/b032f9fb-6222-45f9-a022-cf7ff5b697f5/volumes" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.033074 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2a04af4e-b3be-4b1c-938c-f78a1ead2eba","Type":"ContainerDied","Data":"5dd349dad127e31e89e62475ca46cbf47f9cb9391d8c8fab481f6bb3d025b1b3"} Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 
17:21:55.033420 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.313763 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\") pod \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\" (UID: \"2a04af4e-b3be-4b1c-938c-f78a1ead2eba\") " Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.334110 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b" (OuterVolumeSpecName: "glance") pod "2a04af4e-b3be-4b1c-938c-f78a1ead2eba" (UID: "2a04af4e-b3be-4b1c-938c-f78a1ead2eba"). InnerVolumeSpecName "pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.381803 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.393181 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.418439 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:21:55 crc kubenswrapper[4794]: E0216 17:21:55.419368 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a04af4e-b3be-4b1c-938c-f78a1ead2eba" containerName="glance-httpd" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.419382 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a04af4e-b3be-4b1c-938c-f78a1ead2eba" containerName="glance-httpd" Feb 16 17:21:55 crc kubenswrapper[4794]: E0216 17:21:55.419405 4794 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2a04af4e-b3be-4b1c-938c-f78a1ead2eba" containerName="glance-log" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.419412 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a04af4e-b3be-4b1c-938c-f78a1ead2eba" containerName="glance-log" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.419847 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a04af4e-b3be-4b1c-938c-f78a1ead2eba" containerName="glance-httpd" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.419869 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a04af4e-b3be-4b1c-938c-f78a1ead2eba" containerName="glance-log" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.422887 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.425638 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.425999 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.428194 4794 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\") on node \"crc\" " Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.458459 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.480875 4794 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.481004 4794 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b") on node "crc" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.531058 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8rjm\" (UniqueName: \"kubernetes.io/projected/4cf7b50d-6ee8-41b2-b69f-123961055859-kube-api-access-c8rjm\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.531159 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-scripts\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.531293 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cf7b50d-6ee8-41b2-b69f-123961055859-logs\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.531359 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.531422 
4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.531452 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-config-data\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.531487 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.531518 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4cf7b50d-6ee8-41b2-b69f-123961055859-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.533429 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.533460 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/02eb96684cb2daee1e7757d905c4024416c5994d26b1f18fcded63c6a3978ca1/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.584036 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.632969 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-scripts\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.633033 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cf7b50d-6ee8-41b2-b69f-123961055859-logs\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.633075 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.633096 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-config-data\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.633120 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.633145 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4cf7b50d-6ee8-41b2-b69f-123961055859-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.633252 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8rjm\" (UniqueName: \"kubernetes.io/projected/4cf7b50d-6ee8-41b2-b69f-123961055859-kube-api-access-c8rjm\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.633978 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4cf7b50d-6ee8-41b2-b69f-123961055859-logs\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.633997 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4cf7b50d-6ee8-41b2-b69f-123961055859-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.646643 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.647176 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-scripts\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.648474 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-config-data\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.648584 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.651342 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8rjm\" (UniqueName: \"kubernetes.io/projected/4cf7b50d-6ee8-41b2-b69f-123961055859-kube-api-access-c8rjm\") pod \"glance-default-external-api-0\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:55 crc kubenswrapper[4794]: I0216 17:21:55.787901 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 17:21:56 crc kubenswrapper[4794]: I0216 17:21:56.806390 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a04af4e-b3be-4b1c-938c-f78a1ead2eba" path="/var/lib/kubelet/pods/2a04af4e-b3be-4b1c-938c-f78a1ead2eba/volumes" Feb 16 17:21:58 crc kubenswrapper[4794]: I0216 17:21:58.470955 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" podUID="2f564c83-65cd-4eb0-81b3-155b5a221041" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: connect: connection refused" Feb 16 17:22:03 crc kubenswrapper[4794]: I0216 17:22:03.470464 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" podUID="2f564c83-65cd-4eb0-81b3-155b5a221041" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: connect: connection refused" Feb 16 17:22:03 crc kubenswrapper[4794]: I0216 17:22:03.470941 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" Feb 16 17:22:04 crc kubenswrapper[4794]: E0216 17:22:04.507050 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Feb 16 17:22:04 crc kubenswrapper[4794]: E0216 17:22:04.507407 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgjh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-rppbn_openstack(8f3b58ad-6afe-4194-a578-2f4fec69367c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:22:04 crc kubenswrapper[4794]: E0216 17:22:04.508678 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-rppbn" podUID="8f3b58ad-6afe-4194-a578-2f4fec69367c" Feb 16 17:22:04 crc kubenswrapper[4794]: I0216 17:22:04.515254 4794 scope.go:117] "RemoveContainer" containerID="f8a174e154933e4369ba53e3ce2f7424863eca96a616906616ff5d8f4a0a9f6f" Feb 16 17:22:04 crc kubenswrapper[4794]: E0216 17:22:04.951361 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 16 17:22:04 crc kubenswrapper[4794]: E0216 17:22:04.951929 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnm8q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-rs2k4_openstack(865acfbb-330f-4594-a7d8-64962cab3cd5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:22:04 crc kubenswrapper[4794]: E0216 17:22:04.953534 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-rs2k4" 
podUID="865acfbb-330f-4594-a7d8-64962cab3cd5"
Feb 16 17:22:05 crc kubenswrapper[4794]: E0216 17:22:05.143566 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-rs2k4" podUID="865acfbb-330f-4594-a7d8-64962cab3cd5"
Feb 16 17:22:05 crc kubenswrapper[4794]: E0216 17:22:05.144013 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-rppbn" podUID="8f3b58ad-6afe-4194-a578-2f4fec69367c"
Feb 16 17:22:06 crc kubenswrapper[4794]: I0216 17:22:06.157927 4794 generic.go:334] "Generic (PLEG): container finished" podID="67e15f05-9d62-45f7-a278-aeb9583be1a3" containerID="4aaae524dab826255e6a2ba268bb0f7d36c73d90aa6fb43b268b42cf915e4a6d" exitCode=0
Feb 16 17:22:06 crc kubenswrapper[4794]: I0216 17:22:06.157974 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fs2n9" event={"ID":"67e15f05-9d62-45f7-a278-aeb9583be1a3","Type":"ContainerDied","Data":"4aaae524dab826255e6a2ba268bb0f7d36c73d90aa6fb43b268b42cf915e4a6d"}
Feb 16 17:22:12 crc kubenswrapper[4794]: I0216 17:22:12.847217 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-snkpw"
Feb 16 17:22:12 crc kubenswrapper[4794]: I0216 17:22:12.855691 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lswtm"
Feb 16 17:22:12 crc kubenswrapper[4794]: I0216 17:22:12.865167 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fs2n9"
Feb 16 17:22:12 crc kubenswrapper[4794]: I0216 17:22:12.927283 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-config-data\") pod \"957098f0-d0b0-425d-b74a-fe3c84889eab\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") "
Feb 16 17:22:12 crc kubenswrapper[4794]: I0216 17:22:12.927509 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-fernet-keys\") pod \"957098f0-d0b0-425d-b74a-fe3c84889eab\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") "
Feb 16 17:22:12 crc kubenswrapper[4794]: I0216 17:22:12.927542 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-credential-keys\") pod \"957098f0-d0b0-425d-b74a-fe3c84889eab\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") "
Feb 16 17:22:12 crc kubenswrapper[4794]: I0216 17:22:12.927583 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdtfj\" (UniqueName: \"kubernetes.io/projected/957098f0-d0b0-425d-b74a-fe3c84889eab-kube-api-access-bdtfj\") pod \"957098f0-d0b0-425d-b74a-fe3c84889eab\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") "
Feb 16 17:22:12 crc kubenswrapper[4794]: I0216 17:22:12.927636 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-dns-svc\") pod \"2f564c83-65cd-4eb0-81b3-155b5a221041\" (UID: \"2f564c83-65cd-4eb0-81b3-155b5a221041\") "
Feb 16 17:22:12 crc kubenswrapper[4794]: I0216 17:22:12.927714 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-combined-ca-bundle\") pod \"957098f0-d0b0-425d-b74a-fe3c84889eab\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") "
Feb 16 17:22:12 crc kubenswrapper[4794]: I0216 17:22:12.927749 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-config\") pod \"2f564c83-65cd-4eb0-81b3-155b5a221041\" (UID: \"2f564c83-65cd-4eb0-81b3-155b5a221041\") "
Feb 16 17:22:12 crc kubenswrapper[4794]: I0216 17:22:12.927773 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-scripts\") pod \"957098f0-d0b0-425d-b74a-fe3c84889eab\" (UID: \"957098f0-d0b0-425d-b74a-fe3c84889eab\") "
Feb 16 17:22:12 crc kubenswrapper[4794]: I0216 17:22:12.927812 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-ovsdbserver-sb\") pod \"2f564c83-65cd-4eb0-81b3-155b5a221041\" (UID: \"2f564c83-65cd-4eb0-81b3-155b5a221041\") "
Feb 16 17:22:12 crc kubenswrapper[4794]: I0216 17:22:12.927911 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-ovsdbserver-nb\") pod \"2f564c83-65cd-4eb0-81b3-155b5a221041\" (UID: \"2f564c83-65cd-4eb0-81b3-155b5a221041\") "
Feb 16 17:22:12 crc kubenswrapper[4794]: I0216 17:22:12.927992 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56xrn\" (UniqueName: \"kubernetes.io/projected/2f564c83-65cd-4eb0-81b3-155b5a221041-kube-api-access-56xrn\") pod \"2f564c83-65cd-4eb0-81b3-155b5a221041\" (UID: \"2f564c83-65cd-4eb0-81b3-155b5a221041\") "
Feb 16 17:22:12 crc kubenswrapper[4794]: I0216 17:22:12.931846 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "957098f0-d0b0-425d-b74a-fe3c84889eab" (UID: "957098f0-d0b0-425d-b74a-fe3c84889eab"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:12 crc kubenswrapper[4794]: I0216 17:22:12.937214 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "957098f0-d0b0-425d-b74a-fe3c84889eab" (UID: "957098f0-d0b0-425d-b74a-fe3c84889eab"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:12 crc kubenswrapper[4794]: I0216 17:22:12.937614 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f564c83-65cd-4eb0-81b3-155b5a221041-kube-api-access-56xrn" (OuterVolumeSpecName: "kube-api-access-56xrn") pod "2f564c83-65cd-4eb0-81b3-155b5a221041" (UID: "2f564c83-65cd-4eb0-81b3-155b5a221041"). InnerVolumeSpecName "kube-api-access-56xrn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:22:12 crc kubenswrapper[4794]: I0216 17:22:12.942574 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-scripts" (OuterVolumeSpecName: "scripts") pod "957098f0-d0b0-425d-b74a-fe3c84889eab" (UID: "957098f0-d0b0-425d-b74a-fe3c84889eab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:12 crc kubenswrapper[4794]: I0216 17:22:12.953934 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/957098f0-d0b0-425d-b74a-fe3c84889eab-kube-api-access-bdtfj" (OuterVolumeSpecName: "kube-api-access-bdtfj") pod "957098f0-d0b0-425d-b74a-fe3c84889eab" (UID: "957098f0-d0b0-425d-b74a-fe3c84889eab"). InnerVolumeSpecName "kube-api-access-bdtfj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:22:12 crc kubenswrapper[4794]: I0216 17:22:12.992473 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2f564c83-65cd-4eb0-81b3-155b5a221041" (UID: "2f564c83-65cd-4eb0-81b3-155b5a221041"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:22:12 crc kubenswrapper[4794]: I0216 17:22:12.997696 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "957098f0-d0b0-425d-b74a-fe3c84889eab" (UID: "957098f0-d0b0-425d-b74a-fe3c84889eab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.008963 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2f564c83-65cd-4eb0-81b3-155b5a221041" (UID: "2f564c83-65cd-4eb0-81b3-155b5a221041"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.013204 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-config-data" (OuterVolumeSpecName: "config-data") pod "957098f0-d0b0-425d-b74a-fe3c84889eab" (UID: "957098f0-d0b0-425d-b74a-fe3c84889eab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.030886 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67e15f05-9d62-45f7-a278-aeb9583be1a3-combined-ca-bundle\") pod \"67e15f05-9d62-45f7-a278-aeb9583be1a3\" (UID: \"67e15f05-9d62-45f7-a278-aeb9583be1a3\") "
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.031014 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/67e15f05-9d62-45f7-a278-aeb9583be1a3-config\") pod \"67e15f05-9d62-45f7-a278-aeb9583be1a3\" (UID: \"67e15f05-9d62-45f7-a278-aeb9583be1a3\") "
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.031053 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gxcp\" (UniqueName: \"kubernetes.io/projected/67e15f05-9d62-45f7-a278-aeb9583be1a3-kube-api-access-9gxcp\") pod \"67e15f05-9d62-45f7-a278-aeb9583be1a3\" (UID: \"67e15f05-9d62-45f7-a278-aeb9583be1a3\") "
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.031573 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-config" (OuterVolumeSpecName: "config") pod "2f564c83-65cd-4eb0-81b3-155b5a221041" (UID: "2f564c83-65cd-4eb0-81b3-155b5a221041"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.032200 4794 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.032221 4794 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.032237 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdtfj\" (UniqueName: \"kubernetes.io/projected/957098f0-d0b0-425d-b74a-fe3c84889eab-kube-api-access-bdtfj\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.032250 4794 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.032262 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.032273 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-config\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.032285 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.032296 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.032325 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56xrn\" (UniqueName: \"kubernetes.io/projected/2f564c83-65cd-4eb0-81b3-155b5a221041-kube-api-access-56xrn\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.032337 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/957098f0-d0b0-425d-b74a-fe3c84889eab-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.034827 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e15f05-9d62-45f7-a278-aeb9583be1a3-kube-api-access-9gxcp" (OuterVolumeSpecName: "kube-api-access-9gxcp") pod "67e15f05-9d62-45f7-a278-aeb9583be1a3" (UID: "67e15f05-9d62-45f7-a278-aeb9583be1a3"). InnerVolumeSpecName "kube-api-access-9gxcp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.036148 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2f564c83-65cd-4eb0-81b3-155b5a221041" (UID: "2f564c83-65cd-4eb0-81b3-155b5a221041"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.063183 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67e15f05-9d62-45f7-a278-aeb9583be1a3-config" (OuterVolumeSpecName: "config") pod "67e15f05-9d62-45f7-a278-aeb9583be1a3" (UID: "67e15f05-9d62-45f7-a278-aeb9583be1a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.063781 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67e15f05-9d62-45f7-a278-aeb9583be1a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67e15f05-9d62-45f7-a278-aeb9583be1a3" (UID: "67e15f05-9d62-45f7-a278-aeb9583be1a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.134601 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67e15f05-9d62-45f7-a278-aeb9583be1a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.134637 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f564c83-65cd-4eb0-81b3-155b5a221041-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.134647 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/67e15f05-9d62-45f7-a278-aeb9583be1a3-config\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.134655 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gxcp\" (UniqueName: \"kubernetes.io/projected/67e15f05-9d62-45f7-a278-aeb9583be1a3-kube-api-access-9gxcp\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.248656 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fs2n9"
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.248652 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fs2n9" event={"ID":"67e15f05-9d62-45f7-a278-aeb9583be1a3","Type":"ContainerDied","Data":"6760cd516640d34fd2ed80148d91fc3cd7143898a65aa80120940bf912ffa854"}
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.248796 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6760cd516640d34fd2ed80148d91fc3cd7143898a65aa80120940bf912ffa854"
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.250328 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-snkpw" event={"ID":"957098f0-d0b0-425d-b74a-fe3c84889eab","Type":"ContainerDied","Data":"48c9789245a1615bed212ea31ea7e332480d110b604cfdc9b4a7ce8083bf90a9"}
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.250362 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48c9789245a1615bed212ea31ea7e332480d110b604cfdc9b4a7ce8083bf90a9"
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.250409 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-snkpw"
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.252025 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" event={"ID":"2f564c83-65cd-4eb0-81b3-155b5a221041","Type":"ContainerDied","Data":"4d44083cae3fae77c9ae90af61e3dbf2c76a470baa3b1941555216813438bf24"}
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.252079 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lswtm"
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.293517 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lswtm"]
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.303689 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lswtm"]
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.470763 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-lswtm" podUID="2f564c83-65cd-4eb0-81b3-155b5a221041" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: i/o timeout"
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.947901 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-snkpw"]
Feb 16 17:22:13 crc kubenswrapper[4794]: I0216 17:22:13.960803 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-snkpw"]
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.074252 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wnm9v"]
Feb 16 17:22:14 crc kubenswrapper[4794]: E0216 17:22:14.082600 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f564c83-65cd-4eb0-81b3-155b5a221041" containerName="init"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.082643 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f564c83-65cd-4eb0-81b3-155b5a221041" containerName="init"
Feb 16 17:22:14 crc kubenswrapper[4794]: E0216 17:22:14.082672 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e15f05-9d62-45f7-a278-aeb9583be1a3" containerName="neutron-db-sync"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.082680 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e15f05-9d62-45f7-a278-aeb9583be1a3" containerName="neutron-db-sync"
Feb 16 17:22:14 crc kubenswrapper[4794]: E0216 17:22:14.082713 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f564c83-65cd-4eb0-81b3-155b5a221041" containerName="dnsmasq-dns"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.082721 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f564c83-65cd-4eb0-81b3-155b5a221041" containerName="dnsmasq-dns"
Feb 16 17:22:14 crc kubenswrapper[4794]: E0216 17:22:14.082769 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="957098f0-d0b0-425d-b74a-fe3c84889eab" containerName="keystone-bootstrap"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.082779 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="957098f0-d0b0-425d-b74a-fe3c84889eab" containerName="keystone-bootstrap"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.093035 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f564c83-65cd-4eb0-81b3-155b5a221041" containerName="dnsmasq-dns"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.093111 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="957098f0-d0b0-425d-b74a-fe3c84889eab" containerName="keystone-bootstrap"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.093147 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e15f05-9d62-45f7-a278-aeb9583be1a3" containerName="neutron-db-sync"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.103905 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wnm9v"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.112613 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.113982 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.114709 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gm9wc"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.114745 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.114749 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.122491 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wnm9v"]
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.124132 4794 scope.go:117] "RemoveContainer" containerID="2e70cdea89f15323990b59c352d72a60dddb5e38a27004b3d85849bf805fa539"
Feb 16 17:22:14 crc kubenswrapper[4794]: E0216 17:22:14.176693 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Feb 16 17:22:14 crc kubenswrapper[4794]: E0216 17:22:14.176874 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvqfk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-t9x9p_openstack(706ed090-ccb8-4488-ae71-8c991476fd08): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 16 17:22:14 crc kubenswrapper[4794]: E0216 17:22:14.178564 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-t9x9p" podUID="706ed090-ccb8-4488-ae71-8c991476fd08"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.184038 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-fernet-keys\") pod \"keystone-bootstrap-wnm9v\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " pod="openstack/keystone-bootstrap-wnm9v"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.184364 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-scripts\") pod \"keystone-bootstrap-wnm9v\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " pod="openstack/keystone-bootstrap-wnm9v"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.184516 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-config-data\") pod \"keystone-bootstrap-wnm9v\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " pod="openstack/keystone-bootstrap-wnm9v"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.184696 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svq5g\" (UniqueName: \"kubernetes.io/projected/15d18e7f-9229-47e4-97f3-d5515e5c59fb-kube-api-access-svq5g\") pod \"keystone-bootstrap-wnm9v\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " pod="openstack/keystone-bootstrap-wnm9v"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.184813 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-combined-ca-bundle\") pod \"keystone-bootstrap-wnm9v\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " pod="openstack/keystone-bootstrap-wnm9v"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.184999 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-credential-keys\") pod \"keystone-bootstrap-wnm9v\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " pod="openstack/keystone-bootstrap-wnm9v"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.242337 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-bxq86"]
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.246109 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-bxq86"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.275749 4794 scope.go:117] "RemoveContainer" containerID="7a8351be092aba96088e5aa491898dc2b25cc8446898d1008791eadabd0ab52c"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.278719 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-99f86f5f6-sdjdr"]
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.286692 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-credential-keys\") pod \"keystone-bootstrap-wnm9v\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " pod="openstack/keystone-bootstrap-wnm9v"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.292852 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-fernet-keys\") pod \"keystone-bootstrap-wnm9v\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " pod="openstack/keystone-bootstrap-wnm9v"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.295957 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-scripts\") pod \"keystone-bootstrap-wnm9v\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " pod="openstack/keystone-bootstrap-wnm9v"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.296526 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-config-data\") pod \"keystone-bootstrap-wnm9v\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " pod="openstack/keystone-bootstrap-wnm9v"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.296704 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svq5g\" (UniqueName: \"kubernetes.io/projected/15d18e7f-9229-47e4-97f3-d5515e5c59fb-kube-api-access-svq5g\") pod \"keystone-bootstrap-wnm9v\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " pod="openstack/keystone-bootstrap-wnm9v"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.296818 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-combined-ca-bundle\") pod \"keystone-bootstrap-wnm9v\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " pod="openstack/keystone-bootstrap-wnm9v"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.296181 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-bxq86"]
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.296278 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-99f86f5f6-sdjdr"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.299652 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qxq52"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.302520 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-credential-keys\") pod \"keystone-bootstrap-wnm9v\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " pod="openstack/keystone-bootstrap-wnm9v"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.303127 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.307217 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-config-data\") pod \"keystone-bootstrap-wnm9v\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " pod="openstack/keystone-bootstrap-wnm9v"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.303239 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.307274 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-99f86f5f6-sdjdr"]
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.309394 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-scripts\") pod \"keystone-bootstrap-wnm9v\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " pod="openstack/keystone-bootstrap-wnm9v"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.303349 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.311143 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-fernet-keys\") pod \"keystone-bootstrap-wnm9v\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " pod="openstack/keystone-bootstrap-wnm9v"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.316571 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-combined-ca-bundle\") pod \"keystone-bootstrap-wnm9v\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " pod="openstack/keystone-bootstrap-wnm9v"
Feb 16 17:22:14 crc kubenswrapper[4794]: E0216 17:22:14.340472 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-t9x9p" podUID="706ed090-ccb8-4488-ae71-8c991476fd08"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.344750 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svq5g\" (UniqueName: \"kubernetes.io/projected/15d18e7f-9229-47e4-97f3-d5515e5c59fb-kube-api-access-svq5g\") pod \"keystone-bootstrap-wnm9v\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " pod="openstack/keystone-bootstrap-wnm9v"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.398318 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqjfz\" (UniqueName: \"kubernetes.io/projected/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-kube-api-access-hqjfz\") pod \"dnsmasq-dns-6b7b667979-bxq86\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " pod="openstack/dnsmasq-dns-6b7b667979-bxq86"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.398565 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-httpd-config\") pod \"neutron-99f86f5f6-sdjdr\" (UID: \"5b69fea3-061c-40bb-86ff-ca8af8587049\") " pod="openstack/neutron-99f86f5f6-sdjdr"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.398595 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-bxq86\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " pod="openstack/dnsmasq-dns-6b7b667979-bxq86"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.398621 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-combined-ca-bundle\") pod \"neutron-99f86f5f6-sdjdr\" (UID: \"5b69fea3-061c-40bb-86ff-ca8af8587049\") " pod="openstack/neutron-99f86f5f6-sdjdr"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.398638 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn5d7\" (UniqueName: \"kubernetes.io/projected/5b69fea3-061c-40bb-86ff-ca8af8587049-kube-api-access-qn5d7\") pod \"neutron-99f86f5f6-sdjdr\" (UID: \"5b69fea3-061c-40bb-86ff-ca8af8587049\") " pod="openstack/neutron-99f86f5f6-sdjdr"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.398686 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-dns-svc\") pod \"dnsmasq-dns-6b7b667979-bxq86\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " pod="openstack/dnsmasq-dns-6b7b667979-bxq86"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.398707 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-ovndb-tls-certs\") pod \"neutron-99f86f5f6-sdjdr\" (UID: \"5b69fea3-061c-40bb-86ff-ca8af8587049\") " pod="openstack/neutron-99f86f5f6-sdjdr"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.398735 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-config\") pod \"neutron-99f86f5f6-sdjdr\" (UID: \"5b69fea3-061c-40bb-86ff-ca8af8587049\") " pod="openstack/neutron-99f86f5f6-sdjdr"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.398786 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-bxq86\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " pod="openstack/dnsmasq-dns-6b7b667979-bxq86"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.398819 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-bxq86\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " pod="openstack/dnsmasq-dns-6b7b667979-bxq86"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.398867 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-config\") pod \"dnsmasq-dns-6b7b667979-bxq86\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " pod="openstack/dnsmasq-dns-6b7b667979-bxq86"
Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.432894 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wnm9v" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.484131 4794 scope.go:117] "RemoveContainer" containerID="a9197f571a6a4ec904f6ebf4455d0bbf732cd435435fcc0805cffabdeb5ad6df" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.503745 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-bxq86\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " pod="openstack/dnsmasq-dns-6b7b667979-bxq86" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.503869 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-config\") pod \"dnsmasq-dns-6b7b667979-bxq86\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " pod="openstack/dnsmasq-dns-6b7b667979-bxq86" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.503989 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqjfz\" (UniqueName: \"kubernetes.io/projected/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-kube-api-access-hqjfz\") pod \"dnsmasq-dns-6b7b667979-bxq86\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " pod="openstack/dnsmasq-dns-6b7b667979-bxq86" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.504053 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-httpd-config\") pod \"neutron-99f86f5f6-sdjdr\" (UID: \"5b69fea3-061c-40bb-86ff-ca8af8587049\") " pod="openstack/neutron-99f86f5f6-sdjdr" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.504091 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-bxq86\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " pod="openstack/dnsmasq-dns-6b7b667979-bxq86" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.504126 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-combined-ca-bundle\") pod \"neutron-99f86f5f6-sdjdr\" (UID: \"5b69fea3-061c-40bb-86ff-ca8af8587049\") " pod="openstack/neutron-99f86f5f6-sdjdr" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.504150 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn5d7\" (UniqueName: \"kubernetes.io/projected/5b69fea3-061c-40bb-86ff-ca8af8587049-kube-api-access-qn5d7\") pod \"neutron-99f86f5f6-sdjdr\" (UID: \"5b69fea3-061c-40bb-86ff-ca8af8587049\") " pod="openstack/neutron-99f86f5f6-sdjdr" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.504246 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-dns-svc\") pod \"dnsmasq-dns-6b7b667979-bxq86\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " pod="openstack/dnsmasq-dns-6b7b667979-bxq86" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.504284 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-ovndb-tls-certs\") pod \"neutron-99f86f5f6-sdjdr\" (UID: \"5b69fea3-061c-40bb-86ff-ca8af8587049\") " pod="openstack/neutron-99f86f5f6-sdjdr" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.504355 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-config\") pod 
\"neutron-99f86f5f6-sdjdr\" (UID: \"5b69fea3-061c-40bb-86ff-ca8af8587049\") " pod="openstack/neutron-99f86f5f6-sdjdr" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.504445 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-bxq86\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " pod="openstack/dnsmasq-dns-6b7b667979-bxq86" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.509064 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-bxq86\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " pod="openstack/dnsmasq-dns-6b7b667979-bxq86" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.511000 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-config\") pod \"dnsmasq-dns-6b7b667979-bxq86\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " pod="openstack/dnsmasq-dns-6b7b667979-bxq86" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.511882 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-dns-svc\") pod \"dnsmasq-dns-6b7b667979-bxq86\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " pod="openstack/dnsmasq-dns-6b7b667979-bxq86" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.511925 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-bxq86\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " 
pod="openstack/dnsmasq-dns-6b7b667979-bxq86" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.512740 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-bxq86\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " pod="openstack/dnsmasq-dns-6b7b667979-bxq86" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.518538 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-config\") pod \"neutron-99f86f5f6-sdjdr\" (UID: \"5b69fea3-061c-40bb-86ff-ca8af8587049\") " pod="openstack/neutron-99f86f5f6-sdjdr" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.520805 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-ovndb-tls-certs\") pod \"neutron-99f86f5f6-sdjdr\" (UID: \"5b69fea3-061c-40bb-86ff-ca8af8587049\") " pod="openstack/neutron-99f86f5f6-sdjdr" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.523288 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-combined-ca-bundle\") pod \"neutron-99f86f5f6-sdjdr\" (UID: \"5b69fea3-061c-40bb-86ff-ca8af8587049\") " pod="openstack/neutron-99f86f5f6-sdjdr" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.523360 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-httpd-config\") pod \"neutron-99f86f5f6-sdjdr\" (UID: \"5b69fea3-061c-40bb-86ff-ca8af8587049\") " pod="openstack/neutron-99f86f5f6-sdjdr" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.525740 4794 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qn5d7\" (UniqueName: \"kubernetes.io/projected/5b69fea3-061c-40bb-86ff-ca8af8587049-kube-api-access-qn5d7\") pod \"neutron-99f86f5f6-sdjdr\" (UID: \"5b69fea3-061c-40bb-86ff-ca8af8587049\") " pod="openstack/neutron-99f86f5f6-sdjdr" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.546727 4794 scope.go:117] "RemoveContainer" containerID="9a1377941f258a19d948dcca0bb9670bdaac5c722217194a63ccabb43428ad31" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.559814 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqjfz\" (UniqueName: \"kubernetes.io/projected/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-kube-api-access-hqjfz\") pod \"dnsmasq-dns-6b7b667979-bxq86\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " pod="openstack/dnsmasq-dns-6b7b667979-bxq86" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.743881 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-bxq86" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.761634 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-99f86f5f6-sdjdr" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.834586 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f564c83-65cd-4eb0-81b3-155b5a221041" path="/var/lib/kubelet/pods/2f564c83-65cd-4eb0-81b3-155b5a221041/volumes" Feb 16 17:22:14 crc kubenswrapper[4794]: I0216 17:22:14.835425 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="957098f0-d0b0-425d-b74a-fe3c84889eab" path="/var/lib/kubelet/pods/957098f0-d0b0-425d-b74a-fe3c84889eab/volumes" Feb 16 17:22:15 crc kubenswrapper[4794]: I0216 17:22:14.999081 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 17:22:15 crc kubenswrapper[4794]: I0216 17:22:15.199693 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:22:15 crc kubenswrapper[4794]: W0216 17:22:15.224720 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cf7b50d_6ee8_41b2_b69f_123961055859.slice/crio-6f0e21fb54e8b5c703a514d2f3c7034ddb9a4aac709051e03a9fb6c053f3b800 WatchSource:0}: Error finding container 6f0e21fb54e8b5c703a514d2f3c7034ddb9a4aac709051e03a9fb6c053f3b800: Status 404 returned error can't find the container with id 6f0e21fb54e8b5c703a514d2f3c7034ddb9a4aac709051e03a9fb6c053f3b800 Feb 16 17:22:15 crc kubenswrapper[4794]: I0216 17:22:15.319946 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wnm9v"] Feb 16 17:22:15 crc kubenswrapper[4794]: I0216 17:22:15.413779 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f584ef06-0506-4130-b87a-ec406e89d1f5","Type":"ContainerStarted","Data":"2cc6c7a597b2080303daf29b00c33c2018487cb789f92dfa77811f37fe0d75a5"} Feb 16 17:22:15 crc kubenswrapper[4794]: I0216 17:22:15.471013 4794 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4cf7b50d-6ee8-41b2-b69f-123961055859","Type":"ContainerStarted","Data":"6f0e21fb54e8b5c703a514d2f3c7034ddb9a4aac709051e03a9fb6c053f3b800"} Feb 16 17:22:15 crc kubenswrapper[4794]: I0216 17:22:15.479382 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4fdhf" event={"ID":"7473b04b-0d0a-4c73-ac81-f0ad2959dc79","Type":"ContainerStarted","Data":"5e365cd8b92e9b70ab7a1ff326aa3ea071de71ca4a3fca7e51d64e7410449362"} Feb 16 17:22:15 crc kubenswrapper[4794]: I0216 17:22:15.512040 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-4fdhf" podStartSLOduration=5.113412941 podStartE2EDuration="33.512013851s" podCreationTimestamp="2026-02-16 17:21:42 +0000 UTC" firstStartedPulling="2026-02-16 17:21:44.346796552 +0000 UTC m=+1330.294891209" lastFinishedPulling="2026-02-16 17:22:12.745397472 +0000 UTC m=+1358.693492119" observedRunningTime="2026-02-16 17:22:15.510147968 +0000 UTC m=+1361.458242615" watchObservedRunningTime="2026-02-16 17:22:15.512013851 +0000 UTC m=+1361.460108498" Feb 16 17:22:15 crc kubenswrapper[4794]: I0216 17:22:15.515582 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d19b9ba9-ea39-41de-a397-3c5e844f24d8","Type":"ContainerStarted","Data":"95320eead86b6201e11386ddce890128fb5bfb64949195e914dbc9f3fa6fdfc1"} Feb 16 17:22:15 crc kubenswrapper[4794]: I0216 17:22:15.633947 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-99f86f5f6-sdjdr"] Feb 16 17:22:15 crc kubenswrapper[4794]: W0216 17:22:15.641909 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b69fea3_061c_40bb_86ff_ca8af8587049.slice/crio-b5bd6823370a9894ca639d4b897e2b1cb0f3900a56cff1cd8a184ab7f6f72b08 WatchSource:0}: Error finding container 
b5bd6823370a9894ca639d4b897e2b1cb0f3900a56cff1cd8a184ab7f6f72b08: Status 404 returned error can't find the container with id b5bd6823370a9894ca639d4b897e2b1cb0f3900a56cff1cd8a184ab7f6f72b08 Feb 16 17:22:15 crc kubenswrapper[4794]: I0216 17:22:15.751756 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-bxq86"] Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.531335 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4cf7b50d-6ee8-41b2-b69f-123961055859","Type":"ContainerStarted","Data":"ef455fabfdbfeea902086b034c0c5be8b9f499365193ee1ac962d1962cc87e5a"} Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.534883 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d19b9ba9-ea39-41de-a397-3c5e844f24d8","Type":"ContainerStarted","Data":"f7fcc9d2e2a3cf045de6933c6f3157fa88ef76b4c31433b15d47742f69c704c4"} Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.537257 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-99f86f5f6-sdjdr" event={"ID":"5b69fea3-061c-40bb-86ff-ca8af8587049","Type":"ContainerStarted","Data":"12db5c04ecc3f1a679a59c218185982a095aeb876b9954d19b4c4aecd06fef40"} Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.537313 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-99f86f5f6-sdjdr" event={"ID":"5b69fea3-061c-40bb-86ff-ca8af8587049","Type":"ContainerStarted","Data":"b5bd6823370a9894ca639d4b897e2b1cb0f3900a56cff1cd8a184ab7f6f72b08"} Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.539058 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wnm9v" event={"ID":"15d18e7f-9229-47e4-97f3-d5515e5c59fb","Type":"ContainerStarted","Data":"2979acd342e4124f130f2b0129a7af906efdbe4e15b83cddb51e005fb30ea921"} Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.539089 4794 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wnm9v" event={"ID":"15d18e7f-9229-47e4-97f3-d5515e5c59fb","Type":"ContainerStarted","Data":"7e3d25f279085738649571f02f3b3eb93f94d22d9e6780463e18c2d524d1dcd5"} Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.544009 4794 generic.go:334] "Generic (PLEG): container finished" podID="a0a2ba29-1ca7-4b10-9f24-5810b4e27296" containerID="8a9203a7a5fab91c4951810d4dfba524f991e23a7c384f1706fb1634b26b3f6c" exitCode=0 Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.545375 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-bxq86" event={"ID":"a0a2ba29-1ca7-4b10-9f24-5810b4e27296","Type":"ContainerDied","Data":"8a9203a7a5fab91c4951810d4dfba524f991e23a7c384f1706fb1634b26b3f6c"} Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.545413 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-bxq86" event={"ID":"a0a2ba29-1ca7-4b10-9f24-5810b4e27296","Type":"ContainerStarted","Data":"bb4c5a21c6516f61e89aa83b46a15b1d2b63c00c0afd69a3e0c8aaf1ddd1a330"} Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.560126 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wnm9v" podStartSLOduration=2.560110642 podStartE2EDuration="2.560110642s" podCreationTimestamp="2026-02-16 17:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:16.558473276 +0000 UTC m=+1362.506567923" watchObservedRunningTime="2026-02-16 17:22:16.560110642 +0000 UTC m=+1362.508205289" Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.809210 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c659f6967-vsf27"] Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.811192 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.815318 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.815443 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.820179 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c659f6967-vsf27"] Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.912721 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-ovndb-tls-certs\") pod \"neutron-c659f6967-vsf27\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.913040 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-public-tls-certs\") pod \"neutron-c659f6967-vsf27\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.913097 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-httpd-config\") pod \"neutron-c659f6967-vsf27\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.913197 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-combined-ca-bundle\") pod \"neutron-c659f6967-vsf27\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.913259 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-config\") pod \"neutron-c659f6967-vsf27\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.913332 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29rk8\" (UniqueName: \"kubernetes.io/projected/20805f32-52bf-4449-90fd-8e83635f8154-kube-api-access-29rk8\") pod \"neutron-c659f6967-vsf27\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:16 crc kubenswrapper[4794]: I0216 17:22:16.913354 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-internal-tls-certs\") pod \"neutron-c659f6967-vsf27\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:17 crc kubenswrapper[4794]: I0216 17:22:17.015796 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-ovndb-tls-certs\") pod \"neutron-c659f6967-vsf27\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:17 crc kubenswrapper[4794]: I0216 17:22:17.015848 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-public-tls-certs\") pod \"neutron-c659f6967-vsf27\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:17 crc kubenswrapper[4794]: I0216 17:22:17.015873 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-httpd-config\") pod \"neutron-c659f6967-vsf27\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:17 crc kubenswrapper[4794]: I0216 17:22:17.015983 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-combined-ca-bundle\") pod \"neutron-c659f6967-vsf27\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:17 crc kubenswrapper[4794]: I0216 17:22:17.016851 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-config\") pod \"neutron-c659f6967-vsf27\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:17 crc kubenswrapper[4794]: I0216 17:22:17.016924 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29rk8\" (UniqueName: \"kubernetes.io/projected/20805f32-52bf-4449-90fd-8e83635f8154-kube-api-access-29rk8\") pod \"neutron-c659f6967-vsf27\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:17 crc kubenswrapper[4794]: I0216 17:22:17.016954 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-internal-tls-certs\") pod 
\"neutron-c659f6967-vsf27\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:17 crc kubenswrapper[4794]: I0216 17:22:17.020993 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-internal-tls-certs\") pod \"neutron-c659f6967-vsf27\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:17 crc kubenswrapper[4794]: I0216 17:22:17.025164 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-config\") pod \"neutron-c659f6967-vsf27\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:17 crc kubenswrapper[4794]: I0216 17:22:17.033641 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-public-tls-certs\") pod \"neutron-c659f6967-vsf27\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:17 crc kubenswrapper[4794]: I0216 17:22:17.040820 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-combined-ca-bundle\") pod \"neutron-c659f6967-vsf27\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:17 crc kubenswrapper[4794]: I0216 17:22:17.041039 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-ovndb-tls-certs\") pod \"neutron-c659f6967-vsf27\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:17 crc 
kubenswrapper[4794]: I0216 17:22:17.044782 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-httpd-config\") pod \"neutron-c659f6967-vsf27\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:17 crc kubenswrapper[4794]: I0216 17:22:17.069186 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29rk8\" (UniqueName: \"kubernetes.io/projected/20805f32-52bf-4449-90fd-8e83635f8154-kube-api-access-29rk8\") pod \"neutron-c659f6967-vsf27\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:17 crc kubenswrapper[4794]: I0216 17:22:17.149260 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:17 crc kubenswrapper[4794]: I0216 17:22:17.601070 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-bxq86" event={"ID":"a0a2ba29-1ca7-4b10-9f24-5810b4e27296","Type":"ContainerStarted","Data":"22ab93557e6b5b080a320d16a6c25285734d1893f3818562159a1838e2b62e67"} Feb 16 17:22:17 crc kubenswrapper[4794]: I0216 17:22:17.601366 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-bxq86" Feb 16 17:22:17 crc kubenswrapper[4794]: I0216 17:22:17.604854 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f584ef06-0506-4130-b87a-ec406e89d1f5","Type":"ContainerStarted","Data":"5e1fa942c0ff1b85cda1c6ce6325bf99bad53aec28836eee80f2ac547e95187d"} Feb 16 17:22:17 crc kubenswrapper[4794]: I0216 17:22:17.611593 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-99f86f5f6-sdjdr" 
event={"ID":"5b69fea3-061c-40bb-86ff-ca8af8587049","Type":"ContainerStarted","Data":"e72c1c52f98c2b1baaab6d99b99add46e1dd0d4a019fedd86f26bdd1e4265a79"} Feb 16 17:22:17 crc kubenswrapper[4794]: I0216 17:22:17.636943 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-bxq86" podStartSLOduration=3.636924081 podStartE2EDuration="3.636924081s" podCreationTimestamp="2026-02-16 17:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:17.624622524 +0000 UTC m=+1363.572717181" watchObservedRunningTime="2026-02-16 17:22:17.636924081 +0000 UTC m=+1363.585018728" Feb 16 17:22:17 crc kubenswrapper[4794]: I0216 17:22:17.663059 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-99f86f5f6-sdjdr" podStartSLOduration=3.663042339 podStartE2EDuration="3.663042339s" podCreationTimestamp="2026-02-16 17:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:17.661764423 +0000 UTC m=+1363.609859070" watchObservedRunningTime="2026-02-16 17:22:17.663042339 +0000 UTC m=+1363.611136986" Feb 16 17:22:17 crc kubenswrapper[4794]: I0216 17:22:17.911414 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c659f6967-vsf27"] Feb 16 17:22:18 crc kubenswrapper[4794]: I0216 17:22:18.651136 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c659f6967-vsf27" event={"ID":"20805f32-52bf-4449-90fd-8e83635f8154","Type":"ContainerStarted","Data":"dfba6fb97b7e10eb69d8f50d615d60e352c844939be4b756403304dee500a66e"} Feb 16 17:22:18 crc kubenswrapper[4794]: I0216 17:22:18.651724 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c659f6967-vsf27" 
event={"ID":"20805f32-52bf-4449-90fd-8e83635f8154","Type":"ContainerStarted","Data":"a07003dd6d04b6392ad6a95ed24662d7aed38806951188bf1495200e0a697d3e"} Feb 16 17:22:18 crc kubenswrapper[4794]: I0216 17:22:18.651743 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c659f6967-vsf27" event={"ID":"20805f32-52bf-4449-90fd-8e83635f8154","Type":"ContainerStarted","Data":"1a982d82083335fe814998413022d818815014621912fcf221278afcc2aba732"} Feb 16 17:22:18 crc kubenswrapper[4794]: I0216 17:22:18.651794 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:18 crc kubenswrapper[4794]: I0216 17:22:18.661959 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4cf7b50d-6ee8-41b2-b69f-123961055859","Type":"ContainerStarted","Data":"89e84af07a003d96cbc866678c72d698389a2a203439d0e7c5e5c35be3e29833"} Feb 16 17:22:18 crc kubenswrapper[4794]: I0216 17:22:18.683592 4794 generic.go:334] "Generic (PLEG): container finished" podID="7473b04b-0d0a-4c73-ac81-f0ad2959dc79" containerID="5e365cd8b92e9b70ab7a1ff326aa3ea071de71ca4a3fca7e51d64e7410449362" exitCode=0 Feb 16 17:22:18 crc kubenswrapper[4794]: I0216 17:22:18.683695 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4fdhf" event={"ID":"7473b04b-0d0a-4c73-ac81-f0ad2959dc79","Type":"ContainerDied","Data":"5e365cd8b92e9b70ab7a1ff326aa3ea071de71ca4a3fca7e51d64e7410449362"} Feb 16 17:22:18 crc kubenswrapper[4794]: I0216 17:22:18.690271 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c659f6967-vsf27" podStartSLOduration=2.690242817 podStartE2EDuration="2.690242817s" podCreationTimestamp="2026-02-16 17:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:18.670770018 +0000 UTC m=+1364.618864665" 
watchObservedRunningTime="2026-02-16 17:22:18.690242817 +0000 UTC m=+1364.638337464" Feb 16 17:22:18 crc kubenswrapper[4794]: I0216 17:22:18.695240 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d19b9ba9-ea39-41de-a397-3c5e844f24d8","Type":"ContainerStarted","Data":"32f52e093ea4ea104e1eb0b7e91432ca9fd23e6ef1f6124fdb162c3d7dca3c56"} Feb 16 17:22:18 crc kubenswrapper[4794]: I0216 17:22:18.695827 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-99f86f5f6-sdjdr" Feb 16 17:22:18 crc kubenswrapper[4794]: I0216 17:22:18.728148 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=23.728122907 podStartE2EDuration="23.728122907s" podCreationTimestamp="2026-02-16 17:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:18.694319503 +0000 UTC m=+1364.642414160" watchObservedRunningTime="2026-02-16 17:22:18.728122907 +0000 UTC m=+1364.676217554" Feb 16 17:22:18 crc kubenswrapper[4794]: I0216 17:22:18.805365 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=24.805346117 podStartE2EDuration="24.805346117s" podCreationTimestamp="2026-02-16 17:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:18.79588931 +0000 UTC m=+1364.743983977" watchObservedRunningTime="2026-02-16 17:22:18.805346117 +0000 UTC m=+1364.753440764" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.254079 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-4fdhf" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.327253 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-scripts\") pod \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\" (UID: \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\") " Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.327605 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mghxx\" (UniqueName: \"kubernetes.io/projected/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-kube-api-access-mghxx\") pod \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\" (UID: \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\") " Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.327663 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-config-data\") pod \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\" (UID: \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\") " Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.327696 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-combined-ca-bundle\") pod \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\" (UID: \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\") " Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.327721 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-logs\") pod \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\" (UID: \"7473b04b-0d0a-4c73-ac81-f0ad2959dc79\") " Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.328725 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-logs" (OuterVolumeSpecName: "logs") pod "7473b04b-0d0a-4c73-ac81-f0ad2959dc79" (UID: "7473b04b-0d0a-4c73-ac81-f0ad2959dc79"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.338440 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-scripts" (OuterVolumeSpecName: "scripts") pod "7473b04b-0d0a-4c73-ac81-f0ad2959dc79" (UID: "7473b04b-0d0a-4c73-ac81-f0ad2959dc79"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.339466 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-kube-api-access-mghxx" (OuterVolumeSpecName: "kube-api-access-mghxx") pod "7473b04b-0d0a-4c73-ac81-f0ad2959dc79" (UID: "7473b04b-0d0a-4c73-ac81-f0ad2959dc79"). InnerVolumeSpecName "kube-api-access-mghxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.379983 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7473b04b-0d0a-4c73-ac81-f0ad2959dc79" (UID: "7473b04b-0d0a-4c73-ac81-f0ad2959dc79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.385847 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-config-data" (OuterVolumeSpecName: "config-data") pod "7473b04b-0d0a-4c73-ac81-f0ad2959dc79" (UID: "7473b04b-0d0a-4c73-ac81-f0ad2959dc79"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.430410 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mghxx\" (UniqueName: \"kubernetes.io/projected/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-kube-api-access-mghxx\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.430452 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.430465 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.430475 4794 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.430489 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7473b04b-0d0a-4c73-ac81-f0ad2959dc79-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.755016 4794 generic.go:334] "Generic (PLEG): container finished" podID="15d18e7f-9229-47e4-97f3-d5515e5c59fb" containerID="2979acd342e4124f130f2b0129a7af906efdbe4e15b83cddb51e005fb30ea921" exitCode=0 Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.755131 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wnm9v" event={"ID":"15d18e7f-9229-47e4-97f3-d5515e5c59fb","Type":"ContainerDied","Data":"2979acd342e4124f130f2b0129a7af906efdbe4e15b83cddb51e005fb30ea921"} Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.758935 4794 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4fdhf" event={"ID":"7473b04b-0d0a-4c73-ac81-f0ad2959dc79","Type":"ContainerDied","Data":"b55e385fe345aa7f5e4ba35a458faa3e72711f8e04097aa0674a24d2d8131db6"} Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.758966 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4fdhf" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.758969 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b55e385fe345aa7f5e4ba35a458faa3e72711f8e04097aa0674a24d2d8131db6" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.870754 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-57b87468-bqjtk"] Feb 16 17:22:20 crc kubenswrapper[4794]: E0216 17:22:20.871325 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7473b04b-0d0a-4c73-ac81-f0ad2959dc79" containerName="placement-db-sync" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.871350 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="7473b04b-0d0a-4c73-ac81-f0ad2959dc79" containerName="placement-db-sync" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.871656 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="7473b04b-0d0a-4c73-ac81-f0ad2959dc79" containerName="placement-db-sync" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.873189 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.893836 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9lc9q" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.894169 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.894891 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.895037 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.895203 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.905423 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-57b87468-bqjtk"] Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.953282 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-scripts\") pod \"placement-57b87468-bqjtk\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.953499 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-internal-tls-certs\") pod \"placement-57b87468-bqjtk\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.953565 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-public-tls-certs\") pod \"placement-57b87468-bqjtk\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.953658 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-combined-ca-bundle\") pod \"placement-57b87468-bqjtk\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.953719 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-config-data\") pod \"placement-57b87468-bqjtk\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.953837 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-logs\") pod \"placement-57b87468-bqjtk\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:20 crc kubenswrapper[4794]: I0216 17:22:20.953932 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbdkq\" (UniqueName: \"kubernetes.io/projected/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-kube-api-access-zbdkq\") pod \"placement-57b87468-bqjtk\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:21 crc kubenswrapper[4794]: I0216 17:22:21.056679 4794 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-internal-tls-certs\") pod \"placement-57b87468-bqjtk\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:21 crc kubenswrapper[4794]: I0216 17:22:21.056736 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-public-tls-certs\") pod \"placement-57b87468-bqjtk\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:21 crc kubenswrapper[4794]: I0216 17:22:21.056772 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-combined-ca-bundle\") pod \"placement-57b87468-bqjtk\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:21 crc kubenswrapper[4794]: I0216 17:22:21.056796 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-config-data\") pod \"placement-57b87468-bqjtk\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:21 crc kubenswrapper[4794]: I0216 17:22:21.056843 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-logs\") pod \"placement-57b87468-bqjtk\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:21 crc kubenswrapper[4794]: I0216 17:22:21.056879 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbdkq\" (UniqueName: 
\"kubernetes.io/projected/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-kube-api-access-zbdkq\") pod \"placement-57b87468-bqjtk\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:21 crc kubenswrapper[4794]: I0216 17:22:21.056933 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-scripts\") pod \"placement-57b87468-bqjtk\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:21 crc kubenswrapper[4794]: I0216 17:22:21.058945 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-logs\") pod \"placement-57b87468-bqjtk\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:21 crc kubenswrapper[4794]: I0216 17:22:21.064182 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-config-data\") pod \"placement-57b87468-bqjtk\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:21 crc kubenswrapper[4794]: I0216 17:22:21.064754 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-combined-ca-bundle\") pod \"placement-57b87468-bqjtk\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:21 crc kubenswrapper[4794]: I0216 17:22:21.070729 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-internal-tls-certs\") pod \"placement-57b87468-bqjtk\" (UID: 
\"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:21 crc kubenswrapper[4794]: I0216 17:22:21.081572 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-scripts\") pod \"placement-57b87468-bqjtk\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:21 crc kubenswrapper[4794]: I0216 17:22:21.081598 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-public-tls-certs\") pod \"placement-57b87468-bqjtk\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:21 crc kubenswrapper[4794]: I0216 17:22:21.089150 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbdkq\" (UniqueName: \"kubernetes.io/projected/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-kube-api-access-zbdkq\") pod \"placement-57b87468-bqjtk\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:21 crc kubenswrapper[4794]: I0216 17:22:21.204686 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.125156 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wnm9v" Feb 16 17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.210625 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svq5g\" (UniqueName: \"kubernetes.io/projected/15d18e7f-9229-47e4-97f3-d5515e5c59fb-kube-api-access-svq5g\") pod \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " Feb 16 17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.210802 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-fernet-keys\") pod \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " Feb 16 17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.210960 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-scripts\") pod \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " Feb 16 17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.210994 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-config-data\") pod \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " Feb 16 17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.211026 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-credential-keys\") pod \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " Feb 16 17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.211195 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-combined-ca-bundle\") pod \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\" (UID: \"15d18e7f-9229-47e4-97f3-d5515e5c59fb\") " Feb 16 17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.216459 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15d18e7f-9229-47e4-97f3-d5515e5c59fb-kube-api-access-svq5g" (OuterVolumeSpecName: "kube-api-access-svq5g") pod "15d18e7f-9229-47e4-97f3-d5515e5c59fb" (UID: "15d18e7f-9229-47e4-97f3-d5515e5c59fb"). InnerVolumeSpecName "kube-api-access-svq5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.219202 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-scripts" (OuterVolumeSpecName: "scripts") pod "15d18e7f-9229-47e4-97f3-d5515e5c59fb" (UID: "15d18e7f-9229-47e4-97f3-d5515e5c59fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.220426 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "15d18e7f-9229-47e4-97f3-d5515e5c59fb" (UID: "15d18e7f-9229-47e4-97f3-d5515e5c59fb"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.225724 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "15d18e7f-9229-47e4-97f3-d5515e5c59fb" (UID: "15d18e7f-9229-47e4-97f3-d5515e5c59fb"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.251805 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15d18e7f-9229-47e4-97f3-d5515e5c59fb" (UID: "15d18e7f-9229-47e4-97f3-d5515e5c59fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.254701 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-config-data" (OuterVolumeSpecName: "config-data") pod "15d18e7f-9229-47e4-97f3-d5515e5c59fb" (UID: "15d18e7f-9229-47e4-97f3-d5515e5c59fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.314494 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.314538 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svq5g\" (UniqueName: \"kubernetes.io/projected/15d18e7f-9229-47e4-97f3-d5515e5c59fb-kube-api-access-svq5g\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.314555 4794 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.314565 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 
17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.314574 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.314583 4794 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15d18e7f-9229-47e4-97f3-d5515e5c59fb-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.791239 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wnm9v" event={"ID":"15d18e7f-9229-47e4-97f3-d5515e5c59fb","Type":"ContainerDied","Data":"7e3d25f279085738649571f02f3b3eb93f94d22d9e6780463e18c2d524d1dcd5"} Feb 16 17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.791276 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e3d25f279085738649571f02f3b3eb93f94d22d9e6780463e18c2d524d1dcd5" Feb 16 17:22:23 crc kubenswrapper[4794]: I0216 17:22:23.791339 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wnm9v" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.239816 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5bfdb47d5f-nhr7b"] Feb 16 17:22:24 crc kubenswrapper[4794]: E0216 17:22:24.240290 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15d18e7f-9229-47e4-97f3-d5515e5c59fb" containerName="keystone-bootstrap" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.240331 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="15d18e7f-9229-47e4-97f3-d5515e5c59fb" containerName="keystone-bootstrap" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.240519 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="15d18e7f-9229-47e4-97f3-d5515e5c59fb" containerName="keystone-bootstrap" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.241369 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.245237 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.245592 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.245715 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gm9wc" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.245825 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.246510 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.246651 4794 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"keystone-config-data" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.253517 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5bfdb47d5f-nhr7b"] Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.349996 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e6e652d-e656-43a1-9272-bc48d55d7c35-config-data\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.350475 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e6e652d-e656-43a1-9272-bc48d55d7c35-internal-tls-certs\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.350508 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e6e652d-e656-43a1-9272-bc48d55d7c35-scripts\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.350626 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e6e652d-e656-43a1-9272-bc48d55d7c35-combined-ca-bundle\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.350673 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7fpp\" (UniqueName: 
\"kubernetes.io/projected/0e6e652d-e656-43a1-9272-bc48d55d7c35-kube-api-access-w7fpp\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.350708 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e6e652d-e656-43a1-9272-bc48d55d7c35-public-tls-certs\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.350757 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e6e652d-e656-43a1-9272-bc48d55d7c35-fernet-keys\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.350815 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e6e652d-e656-43a1-9272-bc48d55d7c35-credential-keys\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.453415 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e6e652d-e656-43a1-9272-bc48d55d7c35-combined-ca-bundle\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.453492 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7fpp\" (UniqueName: 
\"kubernetes.io/projected/0e6e652d-e656-43a1-9272-bc48d55d7c35-kube-api-access-w7fpp\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.453527 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e6e652d-e656-43a1-9272-bc48d55d7c35-public-tls-certs\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.453600 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e6e652d-e656-43a1-9272-bc48d55d7c35-fernet-keys\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.453670 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e6e652d-e656-43a1-9272-bc48d55d7c35-credential-keys\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.453838 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e6e652d-e656-43a1-9272-bc48d55d7c35-config-data\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.453955 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e6e652d-e656-43a1-9272-bc48d55d7c35-internal-tls-certs\") pod 
\"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.453983 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e6e652d-e656-43a1-9272-bc48d55d7c35-scripts\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.465274 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e6e652d-e656-43a1-9272-bc48d55d7c35-scripts\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.473713 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e6e652d-e656-43a1-9272-bc48d55d7c35-credential-keys\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.512071 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e6e652d-e656-43a1-9272-bc48d55d7c35-combined-ca-bundle\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.512921 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e6e652d-e656-43a1-9272-bc48d55d7c35-config-data\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc 
kubenswrapper[4794]: I0216 17:22:24.516329 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e6e652d-e656-43a1-9272-bc48d55d7c35-fernet-keys\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.516797 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7fpp\" (UniqueName: \"kubernetes.io/projected/0e6e652d-e656-43a1-9272-bc48d55d7c35-kube-api-access-w7fpp\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.517201 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e6e652d-e656-43a1-9272-bc48d55d7c35-internal-tls-certs\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.519744 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e6e652d-e656-43a1-9272-bc48d55d7c35-public-tls-certs\") pod \"keystone-5bfdb47d5f-nhr7b\" (UID: \"0e6e652d-e656-43a1-9272-bc48d55d7c35\") " pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.561190 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.745481 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-bxq86" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.763454 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.764546 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.764582 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.764594 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.830801 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-rdq4b"] Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.831050 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" podUID="b7b5f58b-cbab-4834-93ce-96d088299265" containerName="dnsmasq-dns" containerID="cri-o://a22d8a2d0cbc3a4e8c7541c2039e59c07416d0e3d5f570bf3913a332455860a9" gracePeriod=10 Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.851493 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 17:22:24 crc kubenswrapper[4794]: I0216 17:22:24.871661 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 17:22:25 crc kubenswrapper[4794]: I0216 17:22:25.789018 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-external-api-0" Feb 16 17:22:25 crc kubenswrapper[4794]: I0216 17:22:25.791291 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 17:22:25 crc kubenswrapper[4794]: I0216 17:22:25.791353 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 17:22:25 crc kubenswrapper[4794]: I0216 17:22:25.791881 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 17:22:25 crc kubenswrapper[4794]: I0216 17:22:25.829332 4794 generic.go:334] "Generic (PLEG): container finished" podID="b7b5f58b-cbab-4834-93ce-96d088299265" containerID="a22d8a2d0cbc3a4e8c7541c2039e59c07416d0e3d5f570bf3913a332455860a9" exitCode=0 Feb 16 17:22:25 crc kubenswrapper[4794]: I0216 17:22:25.830925 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" event={"ID":"b7b5f58b-cbab-4834-93ce-96d088299265","Type":"ContainerDied","Data":"a22d8a2d0cbc3a4e8c7541c2039e59c07416d0e3d5f570bf3913a332455860a9"} Feb 16 17:22:25 crc kubenswrapper[4794]: I0216 17:22:25.858857 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 17:22:25 crc kubenswrapper[4794]: I0216 17:22:25.880873 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.061121 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.199272 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-config\") pod \"b7b5f58b-cbab-4834-93ce-96d088299265\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.199537 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-ovsdbserver-sb\") pod \"b7b5f58b-cbab-4834-93ce-96d088299265\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.199585 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-dns-svc\") pod \"b7b5f58b-cbab-4834-93ce-96d088299265\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.199644 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc67x\" (UniqueName: \"kubernetes.io/projected/b7b5f58b-cbab-4834-93ce-96d088299265-kube-api-access-dc67x\") pod \"b7b5f58b-cbab-4834-93ce-96d088299265\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.199672 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-dns-swift-storage-0\") pod \"b7b5f58b-cbab-4834-93ce-96d088299265\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.199724 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-ovsdbserver-nb\") pod \"b7b5f58b-cbab-4834-93ce-96d088299265\" (UID: \"b7b5f58b-cbab-4834-93ce-96d088299265\") " Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.219348 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7b5f58b-cbab-4834-93ce-96d088299265-kube-api-access-dc67x" (OuterVolumeSpecName: "kube-api-access-dc67x") pod "b7b5f58b-cbab-4834-93ce-96d088299265" (UID: "b7b5f58b-cbab-4834-93ce-96d088299265"). InnerVolumeSpecName "kube-api-access-dc67x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.303077 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc67x\" (UniqueName: \"kubernetes.io/projected/b7b5f58b-cbab-4834-93ce-96d088299265-kube-api-access-dc67x\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.454413 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-57b87468-bqjtk"] Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.473943 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-config" (OuterVolumeSpecName: "config") pod "b7b5f58b-cbab-4834-93ce-96d088299265" (UID: "b7b5f58b-cbab-4834-93ce-96d088299265"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.480805 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b7b5f58b-cbab-4834-93ce-96d088299265" (UID: "b7b5f58b-cbab-4834-93ce-96d088299265"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.482455 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5bfdb47d5f-nhr7b"] Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.499352 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b7b5f58b-cbab-4834-93ce-96d088299265" (UID: "b7b5f58b-cbab-4834-93ce-96d088299265"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.507259 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b7b5f58b-cbab-4834-93ce-96d088299265" (UID: "b7b5f58b-cbab-4834-93ce-96d088299265"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.508532 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.508599 4794 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.508610 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.508620 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.514334 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b7b5f58b-cbab-4834-93ce-96d088299265" (UID: "b7b5f58b-cbab-4834-93ce-96d088299265"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.613043 4794 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7b5f58b-cbab-4834-93ce-96d088299265-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.871668 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rppbn" event={"ID":"8f3b58ad-6afe-4194-a578-2f4fec69367c","Type":"ContainerStarted","Data":"42461fff9709ad54490eb287b1b85b0f2b88b64ac08a0a527d25b18ecc56ec7b"} Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.881617 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-57b87468-bqjtk" event={"ID":"b92c8cdf-5125-46d9-89c1-8549a2dc1b74","Type":"ContainerStarted","Data":"9159454ff34f615c02055d29642b7f8c4cf8c4af2dab8a0f4af98030cad8168a"} Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.891903 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" event={"ID":"b7b5f58b-cbab-4834-93ce-96d088299265","Type":"ContainerDied","Data":"f2c6a4d2b39428c3d6a9f448be232adbfc2116183e8e8de475f2ee49852f23b9"} Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.891977 4794 scope.go:117] "RemoveContainer" containerID="a22d8a2d0cbc3a4e8c7541c2039e59c07416d0e3d5f570bf3913a332455860a9" Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.891987 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-rdq4b" Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.895883 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rs2k4" event={"ID":"865acfbb-330f-4594-a7d8-64962cab3cd5","Type":"ContainerStarted","Data":"9dbd51902899322ece34a2733ec3d8e16d85e9ac734b4818b20a5762bdbbbd8f"} Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.904621 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5bfdb47d5f-nhr7b" event={"ID":"0e6e652d-e656-43a1-9272-bc48d55d7c35","Type":"ContainerStarted","Data":"8d8d85b7793633e558f193d07c6b01a397ce4c4cd5cb4aea4cd0d06a27ce0b71"} Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.904644 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-rppbn" podStartSLOduration=3.994139069 podStartE2EDuration="45.904626434s" podCreationTimestamp="2026-02-16 17:21:41 +0000 UTC" firstStartedPulling="2026-02-16 17:21:43.772981355 +0000 UTC m=+1329.721076002" lastFinishedPulling="2026-02-16 17:22:25.68346872 +0000 UTC m=+1371.631563367" observedRunningTime="2026-02-16 17:22:26.900109857 +0000 UTC m=+1372.848204504" watchObservedRunningTime="2026-02-16 17:22:26.904626434 +0000 UTC m=+1372.852721081" Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.911871 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f584ef06-0506-4130-b87a-ec406e89d1f5","Type":"ContainerStarted","Data":"b91edb3941ecf4c9d9844e364ee4b249a64e9932dc4fbf963e1f784d45802111"} Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.938776 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-rs2k4" podStartSLOduration=4.537349065 podStartE2EDuration="45.938749496s" podCreationTimestamp="2026-02-16 17:21:41 +0000 UTC" firstStartedPulling="2026-02-16 17:21:44.345293714 +0000 UTC m=+1330.293388361" 
lastFinishedPulling="2026-02-16 17:22:25.746694145 +0000 UTC m=+1371.694788792" observedRunningTime="2026-02-16 17:22:26.925482083 +0000 UTC m=+1372.873576730" watchObservedRunningTime="2026-02-16 17:22:26.938749496 +0000 UTC m=+1372.886844143" Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.947612 4794 scope.go:117] "RemoveContainer" containerID="a30a56f991894269edeb947b028c97f9f8858cb175549ca976e6c99044addb7d" Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.965379 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-rdq4b"] Feb 16 17:22:26 crc kubenswrapper[4794]: I0216 17:22:26.978019 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-rdq4b"] Feb 16 17:22:27 crc kubenswrapper[4794]: I0216 17:22:27.929727 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-57b87468-bqjtk" event={"ID":"b92c8cdf-5125-46d9-89c1-8549a2dc1b74","Type":"ContainerStarted","Data":"ad5759da07de4f2d2fa94d28bda14c0227f2d26817e7a0edb7a7e29f3edd7c8f"} Feb 16 17:22:27 crc kubenswrapper[4794]: I0216 17:22:27.930298 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-57b87468-bqjtk" event={"ID":"b92c8cdf-5125-46d9-89c1-8549a2dc1b74","Type":"ContainerStarted","Data":"025c1074d07a609013845c69edfd72908e6163c9e3e9bc96693d96d9fbd6981f"} Feb 16 17:22:27 crc kubenswrapper[4794]: I0216 17:22:27.931857 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:27 crc kubenswrapper[4794]: I0216 17:22:27.931891 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-57b87468-bqjtk" Feb 16 17:22:27 crc kubenswrapper[4794]: I0216 17:22:27.937784 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5bfdb47d5f-nhr7b" 
event={"ID":"0e6e652d-e656-43a1-9272-bc48d55d7c35","Type":"ContainerStarted","Data":"597d1243b8ebadd758e75044da131164c93740620dafb06ba8ba41b9b1ee3468"} Feb 16 17:22:27 crc kubenswrapper[4794]: I0216 17:22:27.937992 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:27 crc kubenswrapper[4794]: I0216 17:22:27.999057 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5bfdb47d5f-nhr7b" podStartSLOduration=3.999041239 podStartE2EDuration="3.999041239s" podCreationTimestamp="2026-02-16 17:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:27.991742563 +0000 UTC m=+1373.939837210" watchObservedRunningTime="2026-02-16 17:22:27.999041239 +0000 UTC m=+1373.947135886" Feb 16 17:22:28 crc kubenswrapper[4794]: I0216 17:22:28.000008 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-57b87468-bqjtk" podStartSLOduration=8.000003346 podStartE2EDuration="8.000003346s" podCreationTimestamp="2026-02-16 17:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:27.969345221 +0000 UTC m=+1373.917439868" watchObservedRunningTime="2026-02-16 17:22:28.000003346 +0000 UTC m=+1373.948098003" Feb 16 17:22:28 crc kubenswrapper[4794]: I0216 17:22:28.808873 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7b5f58b-cbab-4834-93ce-96d088299265" path="/var/lib/kubelet/pods/b7b5f58b-cbab-4834-93ce-96d088299265/volumes" Feb 16 17:22:30 crc kubenswrapper[4794]: I0216 17:22:30.778045 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 17:22:30 crc kubenswrapper[4794]: I0216 17:22:30.778608 4794 prober_manager.go:312] "Failed to 
trigger a manual run" probe="Readiness" Feb 16 17:22:30 crc kubenswrapper[4794]: I0216 17:22:30.862804 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 17:22:30 crc kubenswrapper[4794]: I0216 17:22:30.902966 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 17:22:30 crc kubenswrapper[4794]: I0216 17:22:30.903084 4794 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:22:30 crc kubenswrapper[4794]: I0216 17:22:30.904228 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 17:22:31 crc kubenswrapper[4794]: I0216 17:22:31.207347 4794 generic.go:334] "Generic (PLEG): container finished" podID="865acfbb-330f-4594-a7d8-64962cab3cd5" containerID="9dbd51902899322ece34a2733ec3d8e16d85e9ac734b4818b20a5762bdbbbd8f" exitCode=0 Feb 16 17:22:31 crc kubenswrapper[4794]: I0216 17:22:31.207437 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rs2k4" event={"ID":"865acfbb-330f-4594-a7d8-64962cab3cd5","Type":"ContainerDied","Data":"9dbd51902899322ece34a2733ec3d8e16d85e9ac734b4818b20a5762bdbbbd8f"} Feb 16 17:22:31 crc kubenswrapper[4794]: I0216 17:22:31.209456 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-t9x9p" event={"ID":"706ed090-ccb8-4488-ae71-8c991476fd08","Type":"ContainerStarted","Data":"7f12354d91da9ae57eb9a6a0abd89f7615e632c66398378e2e904dc37a6b95a0"} Feb 16 17:22:31 crc kubenswrapper[4794]: I0216 17:22:31.260364 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-t9x9p" podStartSLOduration=4.586504507 podStartE2EDuration="50.260281706s" podCreationTimestamp="2026-02-16 17:21:41 +0000 UTC" firstStartedPulling="2026-02-16 17:21:43.866044528 +0000 UTC m=+1329.814139175" 
lastFinishedPulling="2026-02-16 17:22:29.539821727 +0000 UTC m=+1375.487916374" observedRunningTime="2026-02-16 17:22:31.251726344 +0000 UTC m=+1377.199820981" watchObservedRunningTime="2026-02-16 17:22:31.260281706 +0000 UTC m=+1377.208376403" Feb 16 17:22:35 crc kubenswrapper[4794]: I0216 17:22:35.259404 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rs2k4" Feb 16 17:22:35 crc kubenswrapper[4794]: I0216 17:22:35.268240 4794 generic.go:334] "Generic (PLEG): container finished" podID="8f3b58ad-6afe-4194-a578-2f4fec69367c" containerID="42461fff9709ad54490eb287b1b85b0f2b88b64ac08a0a527d25b18ecc56ec7b" exitCode=0 Feb 16 17:22:35 crc kubenswrapper[4794]: I0216 17:22:35.268316 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rppbn" event={"ID":"8f3b58ad-6afe-4194-a578-2f4fec69367c","Type":"ContainerDied","Data":"42461fff9709ad54490eb287b1b85b0f2b88b64ac08a0a527d25b18ecc56ec7b"} Feb 16 17:22:35 crc kubenswrapper[4794]: I0216 17:22:35.270995 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rs2k4" event={"ID":"865acfbb-330f-4594-a7d8-64962cab3cd5","Type":"ContainerDied","Data":"a201a8c9b7838d26172a1353efa266d9ce445b8e231a61ae7cb8e9eff3205964"} Feb 16 17:22:35 crc kubenswrapper[4794]: I0216 17:22:35.271042 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a201a8c9b7838d26172a1353efa266d9ce445b8e231a61ae7cb8e9eff3205964" Feb 16 17:22:35 crc kubenswrapper[4794]: I0216 17:22:35.271041 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-rs2k4" Feb 16 17:22:35 crc kubenswrapper[4794]: I0216 17:22:35.419771 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865acfbb-330f-4594-a7d8-64962cab3cd5-combined-ca-bundle\") pod \"865acfbb-330f-4594-a7d8-64962cab3cd5\" (UID: \"865acfbb-330f-4594-a7d8-64962cab3cd5\") " Feb 16 17:22:35 crc kubenswrapper[4794]: I0216 17:22:35.420118 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnm8q\" (UniqueName: \"kubernetes.io/projected/865acfbb-330f-4594-a7d8-64962cab3cd5-kube-api-access-wnm8q\") pod \"865acfbb-330f-4594-a7d8-64962cab3cd5\" (UID: \"865acfbb-330f-4594-a7d8-64962cab3cd5\") " Feb 16 17:22:35 crc kubenswrapper[4794]: I0216 17:22:35.420160 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/865acfbb-330f-4594-a7d8-64962cab3cd5-db-sync-config-data\") pod \"865acfbb-330f-4594-a7d8-64962cab3cd5\" (UID: \"865acfbb-330f-4594-a7d8-64962cab3cd5\") " Feb 16 17:22:35 crc kubenswrapper[4794]: I0216 17:22:35.424962 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/865acfbb-330f-4594-a7d8-64962cab3cd5-kube-api-access-wnm8q" (OuterVolumeSpecName: "kube-api-access-wnm8q") pod "865acfbb-330f-4594-a7d8-64962cab3cd5" (UID: "865acfbb-330f-4594-a7d8-64962cab3cd5"). InnerVolumeSpecName "kube-api-access-wnm8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:22:35 crc kubenswrapper[4794]: I0216 17:22:35.431850 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865acfbb-330f-4594-a7d8-64962cab3cd5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "865acfbb-330f-4594-a7d8-64962cab3cd5" (UID: "865acfbb-330f-4594-a7d8-64962cab3cd5"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:35 crc kubenswrapper[4794]: I0216 17:22:35.453448 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865acfbb-330f-4594-a7d8-64962cab3cd5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "865acfbb-330f-4594-a7d8-64962cab3cd5" (UID: "865acfbb-330f-4594-a7d8-64962cab3cd5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:35 crc kubenswrapper[4794]: I0216 17:22:35.522571 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnm8q\" (UniqueName: \"kubernetes.io/projected/865acfbb-330f-4594-a7d8-64962cab3cd5-kube-api-access-wnm8q\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:35 crc kubenswrapper[4794]: I0216 17:22:35.522603 4794 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/865acfbb-330f-4594-a7d8-64962cab3cd5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:35 crc kubenswrapper[4794]: I0216 17:22:35.522617 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865acfbb-330f-4594-a7d8-64962cab3cd5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.598283 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-8df6f765f-hzfz6"] Feb 16 17:22:36 crc kubenswrapper[4794]: E0216 17:22:36.603863 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="865acfbb-330f-4594-a7d8-64962cab3cd5" containerName="barbican-db-sync" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.603889 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="865acfbb-330f-4594-a7d8-64962cab3cd5" containerName="barbican-db-sync" Feb 16 17:22:36 crc kubenswrapper[4794]: E0216 17:22:36.603937 
4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b5f58b-cbab-4834-93ce-96d088299265" containerName="init" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.603946 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b5f58b-cbab-4834-93ce-96d088299265" containerName="init" Feb 16 17:22:36 crc kubenswrapper[4794]: E0216 17:22:36.603983 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b5f58b-cbab-4834-93ce-96d088299265" containerName="dnsmasq-dns" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.603992 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b5f58b-cbab-4834-93ce-96d088299265" containerName="dnsmasq-dns" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.604491 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7b5f58b-cbab-4834-93ce-96d088299265" containerName="dnsmasq-dns" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.604536 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="865acfbb-330f-4594-a7d8-64962cab3cd5" containerName="barbican-db-sync" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.611872 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-8df6f765f-hzfz6" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.618639 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.619118 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tk9n7" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.619276 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.663266 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d47909-0796-4ee7-8209-9c30ae86ff2f-combined-ca-bundle\") pod \"barbican-worker-8df6f765f-hzfz6\" (UID: \"20d47909-0796-4ee7-8209-9c30ae86ff2f\") " pod="openstack/barbican-worker-8df6f765f-hzfz6" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.680748 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20d47909-0796-4ee7-8209-9c30ae86ff2f-config-data-custom\") pod \"barbican-worker-8df6f765f-hzfz6\" (UID: \"20d47909-0796-4ee7-8209-9c30ae86ff2f\") " pod="openstack/barbican-worker-8df6f765f-hzfz6" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.680935 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d47909-0796-4ee7-8209-9c30ae86ff2f-logs\") pod \"barbican-worker-8df6f765f-hzfz6\" (UID: \"20d47909-0796-4ee7-8209-9c30ae86ff2f\") " pod="openstack/barbican-worker-8df6f765f-hzfz6" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.681199 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/20d47909-0796-4ee7-8209-9c30ae86ff2f-config-data\") pod \"barbican-worker-8df6f765f-hzfz6\" (UID: \"20d47909-0796-4ee7-8209-9c30ae86ff2f\") " pod="openstack/barbican-worker-8df6f765f-hzfz6" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.682052 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6wmk\" (UniqueName: \"kubernetes.io/projected/20d47909-0796-4ee7-8209-9c30ae86ff2f-kube-api-access-p6wmk\") pod \"barbican-worker-8df6f765f-hzfz6\" (UID: \"20d47909-0796-4ee7-8209-9c30ae86ff2f\") " pod="openstack/barbican-worker-8df6f765f-hzfz6" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.717875 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8df6f765f-hzfz6"] Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.727782 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-796f585bbb-7grdw"] Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.732220 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-796f585bbb-7grdw" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.740499 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.740783 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-796f585bbb-7grdw"] Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.784937 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6wmk\" (UniqueName: \"kubernetes.io/projected/20d47909-0796-4ee7-8209-9c30ae86ff2f-kube-api-access-p6wmk\") pod \"barbican-worker-8df6f765f-hzfz6\" (UID: \"20d47909-0796-4ee7-8209-9c30ae86ff2f\") " pod="openstack/barbican-worker-8df6f765f-hzfz6" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.785206 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2beacbf-4b81-4375-be49-872edd3d0d9d-config-data-custom\") pod \"barbican-keystone-listener-796f585bbb-7grdw\" (UID: \"f2beacbf-4b81-4375-be49-872edd3d0d9d\") " pod="openstack/barbican-keystone-listener-796f585bbb-7grdw" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.785576 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2beacbf-4b81-4375-be49-872edd3d0d9d-logs\") pod \"barbican-keystone-listener-796f585bbb-7grdw\" (UID: \"f2beacbf-4b81-4375-be49-872edd3d0d9d\") " pod="openstack/barbican-keystone-listener-796f585bbb-7grdw" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.785662 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z87tr\" (UniqueName: 
\"kubernetes.io/projected/f2beacbf-4b81-4375-be49-872edd3d0d9d-kube-api-access-z87tr\") pod \"barbican-keystone-listener-796f585bbb-7grdw\" (UID: \"f2beacbf-4b81-4375-be49-872edd3d0d9d\") " pod="openstack/barbican-keystone-listener-796f585bbb-7grdw" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.785798 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d47909-0796-4ee7-8209-9c30ae86ff2f-combined-ca-bundle\") pod \"barbican-worker-8df6f765f-hzfz6\" (UID: \"20d47909-0796-4ee7-8209-9c30ae86ff2f\") " pod="openstack/barbican-worker-8df6f765f-hzfz6" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.785936 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20d47909-0796-4ee7-8209-9c30ae86ff2f-config-data-custom\") pod \"barbican-worker-8df6f765f-hzfz6\" (UID: \"20d47909-0796-4ee7-8209-9c30ae86ff2f\") " pod="openstack/barbican-worker-8df6f765f-hzfz6" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.786025 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d47909-0796-4ee7-8209-9c30ae86ff2f-logs\") pod \"barbican-worker-8df6f765f-hzfz6\" (UID: \"20d47909-0796-4ee7-8209-9c30ae86ff2f\") " pod="openstack/barbican-worker-8df6f765f-hzfz6" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.786191 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d47909-0796-4ee7-8209-9c30ae86ff2f-config-data\") pod \"barbican-worker-8df6f765f-hzfz6\" (UID: \"20d47909-0796-4ee7-8209-9c30ae86ff2f\") " pod="openstack/barbican-worker-8df6f765f-hzfz6" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.786276 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f2beacbf-4b81-4375-be49-872edd3d0d9d-combined-ca-bundle\") pod \"barbican-keystone-listener-796f585bbb-7grdw\" (UID: \"f2beacbf-4b81-4375-be49-872edd3d0d9d\") " pod="openstack/barbican-keystone-listener-796f585bbb-7grdw" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.787891 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2beacbf-4b81-4375-be49-872edd3d0d9d-config-data\") pod \"barbican-keystone-listener-796f585bbb-7grdw\" (UID: \"f2beacbf-4b81-4375-be49-872edd3d0d9d\") " pod="openstack/barbican-keystone-listener-796f585bbb-7grdw" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.788377 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d47909-0796-4ee7-8209-9c30ae86ff2f-logs\") pod \"barbican-worker-8df6f765f-hzfz6\" (UID: \"20d47909-0796-4ee7-8209-9c30ae86ff2f\") " pod="openstack/barbican-worker-8df6f765f-hzfz6" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.802526 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20d47909-0796-4ee7-8209-9c30ae86ff2f-config-data-custom\") pod \"barbican-worker-8df6f765f-hzfz6\" (UID: \"20d47909-0796-4ee7-8209-9c30ae86ff2f\") " pod="openstack/barbican-worker-8df6f765f-hzfz6" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.807531 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6wmk\" (UniqueName: \"kubernetes.io/projected/20d47909-0796-4ee7-8209-9c30ae86ff2f-kube-api-access-p6wmk\") pod \"barbican-worker-8df6f765f-hzfz6\" (UID: \"20d47909-0796-4ee7-8209-9c30ae86ff2f\") " pod="openstack/barbican-worker-8df6f765f-hzfz6" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.809896 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/20d47909-0796-4ee7-8209-9c30ae86ff2f-config-data\") pod \"barbican-worker-8df6f765f-hzfz6\" (UID: \"20d47909-0796-4ee7-8209-9c30ae86ff2f\") " pod="openstack/barbican-worker-8df6f765f-hzfz6" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.819052 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d47909-0796-4ee7-8209-9c30ae86ff2f-combined-ca-bundle\") pod \"barbican-worker-8df6f765f-hzfz6\" (UID: \"20d47909-0796-4ee7-8209-9c30ae86ff2f\") " pod="openstack/barbican-worker-8df6f765f-hzfz6" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.828457 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-k7qkz"] Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.831099 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-k7qkz"] Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.831335 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.887134 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5544448f6b-g648r"] Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.889515 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5544448f6b-g648r" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.894489 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.902793 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5544448f6b-g648r"] Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.915574 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-k7qkz\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") " pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.916946 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2beacbf-4b81-4375-be49-872edd3d0d9d-logs\") pod \"barbican-keystone-listener-796f585bbb-7grdw\" (UID: \"f2beacbf-4b81-4375-be49-872edd3d0d9d\") " pod="openstack/barbican-keystone-listener-796f585bbb-7grdw" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.917081 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z87tr\" (UniqueName: \"kubernetes.io/projected/f2beacbf-4b81-4375-be49-872edd3d0d9d-kube-api-access-z87tr\") pod \"barbican-keystone-listener-796f585bbb-7grdw\" (UID: \"f2beacbf-4b81-4375-be49-872edd3d0d9d\") " pod="openstack/barbican-keystone-listener-796f585bbb-7grdw" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.917339 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-k7qkz\" (UID: 
\"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") " pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.917590 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqdlt\" (UniqueName: \"kubernetes.io/projected/f75a3156-6e40-4c41-b47d-0e0cda2882ba-kube-api-access-pqdlt\") pod \"dnsmasq-dns-848cf88cfc-k7qkz\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") " pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.917759 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-k7qkz\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") " pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.917920 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2beacbf-4b81-4375-be49-872edd3d0d9d-combined-ca-bundle\") pod \"barbican-keystone-listener-796f585bbb-7grdw\" (UID: \"f2beacbf-4b81-4375-be49-872edd3d0d9d\") " pod="openstack/barbican-keystone-listener-796f585bbb-7grdw" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.918103 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-k7qkz\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") " pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.918362 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2beacbf-4b81-4375-be49-872edd3d0d9d-config-data\") pod 
\"barbican-keystone-listener-796f585bbb-7grdw\" (UID: \"f2beacbf-4b81-4375-be49-872edd3d0d9d\") " pod="openstack/barbican-keystone-listener-796f585bbb-7grdw" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.918531 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-config\") pod \"dnsmasq-dns-848cf88cfc-k7qkz\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") " pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.918703 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2beacbf-4b81-4375-be49-872edd3d0d9d-config-data-custom\") pod \"barbican-keystone-listener-796f585bbb-7grdw\" (UID: \"f2beacbf-4b81-4375-be49-872edd3d0d9d\") " pod="openstack/barbican-keystone-listener-796f585bbb-7grdw" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.920397 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2beacbf-4b81-4375-be49-872edd3d0d9d-logs\") pod \"barbican-keystone-listener-796f585bbb-7grdw\" (UID: \"f2beacbf-4b81-4375-be49-872edd3d0d9d\") " pod="openstack/barbican-keystone-listener-796f585bbb-7grdw" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.925011 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2beacbf-4b81-4375-be49-872edd3d0d9d-config-data-custom\") pod \"barbican-keystone-listener-796f585bbb-7grdw\" (UID: \"f2beacbf-4b81-4375-be49-872edd3d0d9d\") " pod="openstack/barbican-keystone-listener-796f585bbb-7grdw" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.926201 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f2beacbf-4b81-4375-be49-872edd3d0d9d-config-data\") pod \"barbican-keystone-listener-796f585bbb-7grdw\" (UID: \"f2beacbf-4b81-4375-be49-872edd3d0d9d\") " pod="openstack/barbican-keystone-listener-796f585bbb-7grdw" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.927796 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2beacbf-4b81-4375-be49-872edd3d0d9d-combined-ca-bundle\") pod \"barbican-keystone-listener-796f585bbb-7grdw\" (UID: \"f2beacbf-4b81-4375-be49-872edd3d0d9d\") " pod="openstack/barbican-keystone-listener-796f585bbb-7grdw" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.936046 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-8df6f765f-hzfz6" Feb 16 17:22:36 crc kubenswrapper[4794]: I0216 17:22:36.938902 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z87tr\" (UniqueName: \"kubernetes.io/projected/f2beacbf-4b81-4375-be49-872edd3d0d9d-kube-api-access-z87tr\") pod \"barbican-keystone-listener-796f585bbb-7grdw\" (UID: \"f2beacbf-4b81-4375-be49-872edd3d0d9d\") " pod="openstack/barbican-keystone-listener-796f585bbb-7grdw" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.012760 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-796f585bbb-7grdw" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.017856 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-rppbn" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.020537 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/483c093a-519b-46a6-87c0-a4b43efc587e-logs\") pod \"barbican-api-5544448f6b-g648r\" (UID: \"483c093a-519b-46a6-87c0-a4b43efc587e\") " pod="openstack/barbican-api-5544448f6b-g648r" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.020624 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqdlt\" (UniqueName: \"kubernetes.io/projected/f75a3156-6e40-4c41-b47d-0e0cda2882ba-kube-api-access-pqdlt\") pod \"dnsmasq-dns-848cf88cfc-k7qkz\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") " pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.020688 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-k7qkz\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") " pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.020727 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-k7qkz\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") " pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.020795 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-config\") pod \"dnsmasq-dns-848cf88cfc-k7qkz\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") " pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" 
Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.020825 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/483c093a-519b-46a6-87c0-a4b43efc587e-config-data-custom\") pod \"barbican-api-5544448f6b-g648r\" (UID: \"483c093a-519b-46a6-87c0-a4b43efc587e\") " pod="openstack/barbican-api-5544448f6b-g648r" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.020886 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-k7qkz\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") " pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.020981 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/483c093a-519b-46a6-87c0-a4b43efc587e-config-data\") pod \"barbican-api-5544448f6b-g648r\" (UID: \"483c093a-519b-46a6-87c0-a4b43efc587e\") " pod="openstack/barbican-api-5544448f6b-g648r" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.021012 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-k7qkz\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") " pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.021035 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb6wx\" (UniqueName: \"kubernetes.io/projected/483c093a-519b-46a6-87c0-a4b43efc587e-kube-api-access-mb6wx\") pod \"barbican-api-5544448f6b-g648r\" (UID: \"483c093a-519b-46a6-87c0-a4b43efc587e\") " 
pod="openstack/barbican-api-5544448f6b-g648r" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.021076 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483c093a-519b-46a6-87c0-a4b43efc587e-combined-ca-bundle\") pod \"barbican-api-5544448f6b-g648r\" (UID: \"483c093a-519b-46a6-87c0-a4b43efc587e\") " pod="openstack/barbican-api-5544448f6b-g648r" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.022533 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-k7qkz\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") " pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.023130 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-k7qkz\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") " pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.023659 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-config\") pod \"dnsmasq-dns-848cf88cfc-k7qkz\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") " pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.031091 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-k7qkz\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") " pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" Feb 16 17:22:37 crc 
kubenswrapper[4794]: I0216 17:22:37.032437 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-k7qkz\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") " pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.043996 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqdlt\" (UniqueName: \"kubernetes.io/projected/f75a3156-6e40-4c41-b47d-0e0cda2882ba-kube-api-access-pqdlt\") pod \"dnsmasq-dns-848cf88cfc-k7qkz\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") " pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.077675 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.122893 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3b58ad-6afe-4194-a578-2f4fec69367c-combined-ca-bundle\") pod \"8f3b58ad-6afe-4194-a578-2f4fec69367c\" (UID: \"8f3b58ad-6afe-4194-a578-2f4fec69367c\") " Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.122932 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3b58ad-6afe-4194-a578-2f4fec69367c-config-data\") pod \"8f3b58ad-6afe-4194-a578-2f4fec69367c\" (UID: \"8f3b58ad-6afe-4194-a578-2f4fec69367c\") " Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.123008 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgjh2\" (UniqueName: \"kubernetes.io/projected/8f3b58ad-6afe-4194-a578-2f4fec69367c-kube-api-access-pgjh2\") pod \"8f3b58ad-6afe-4194-a578-2f4fec69367c\" (UID: \"8f3b58ad-6afe-4194-a578-2f4fec69367c\") " 
Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.123450 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/483c093a-519b-46a6-87c0-a4b43efc587e-config-data\") pod \"barbican-api-5544448f6b-g648r\" (UID: \"483c093a-519b-46a6-87c0-a4b43efc587e\") " pod="openstack/barbican-api-5544448f6b-g648r" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.123484 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb6wx\" (UniqueName: \"kubernetes.io/projected/483c093a-519b-46a6-87c0-a4b43efc587e-kube-api-access-mb6wx\") pod \"barbican-api-5544448f6b-g648r\" (UID: \"483c093a-519b-46a6-87c0-a4b43efc587e\") " pod="openstack/barbican-api-5544448f6b-g648r" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.123521 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483c093a-519b-46a6-87c0-a4b43efc587e-combined-ca-bundle\") pod \"barbican-api-5544448f6b-g648r\" (UID: \"483c093a-519b-46a6-87c0-a4b43efc587e\") " pod="openstack/barbican-api-5544448f6b-g648r" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.123578 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/483c093a-519b-46a6-87c0-a4b43efc587e-logs\") pod \"barbican-api-5544448f6b-g648r\" (UID: \"483c093a-519b-46a6-87c0-a4b43efc587e\") " pod="openstack/barbican-api-5544448f6b-g648r" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.123752 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/483c093a-519b-46a6-87c0-a4b43efc587e-config-data-custom\") pod \"barbican-api-5544448f6b-g648r\" (UID: \"483c093a-519b-46a6-87c0-a4b43efc587e\") " pod="openstack/barbican-api-5544448f6b-g648r" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 
17:22:37.128141 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/483c093a-519b-46a6-87c0-a4b43efc587e-config-data\") pod \"barbican-api-5544448f6b-g648r\" (UID: \"483c093a-519b-46a6-87c0-a4b43efc587e\") " pod="openstack/barbican-api-5544448f6b-g648r" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.128617 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f3b58ad-6afe-4194-a578-2f4fec69367c-kube-api-access-pgjh2" (OuterVolumeSpecName: "kube-api-access-pgjh2") pod "8f3b58ad-6afe-4194-a578-2f4fec69367c" (UID: "8f3b58ad-6afe-4194-a578-2f4fec69367c"). InnerVolumeSpecName "kube-api-access-pgjh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.128691 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/483c093a-519b-46a6-87c0-a4b43efc587e-logs\") pod \"barbican-api-5544448f6b-g648r\" (UID: \"483c093a-519b-46a6-87c0-a4b43efc587e\") " pod="openstack/barbican-api-5544448f6b-g648r" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.132488 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/483c093a-519b-46a6-87c0-a4b43efc587e-config-data-custom\") pod \"barbican-api-5544448f6b-g648r\" (UID: \"483c093a-519b-46a6-87c0-a4b43efc587e\") " pod="openstack/barbican-api-5544448f6b-g648r" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.134343 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483c093a-519b-46a6-87c0-a4b43efc587e-combined-ca-bundle\") pod \"barbican-api-5544448f6b-g648r\" (UID: \"483c093a-519b-46a6-87c0-a4b43efc587e\") " pod="openstack/barbican-api-5544448f6b-g648r" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.151336 4794 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb6wx\" (UniqueName: \"kubernetes.io/projected/483c093a-519b-46a6-87c0-a4b43efc587e-kube-api-access-mb6wx\") pod \"barbican-api-5544448f6b-g648r\" (UID: \"483c093a-519b-46a6-87c0-a4b43efc587e\") " pod="openstack/barbican-api-5544448f6b-g648r" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.168837 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f3b58ad-6afe-4194-a578-2f4fec69367c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f3b58ad-6afe-4194-a578-2f4fec69367c" (UID: "8f3b58ad-6afe-4194-a578-2f4fec69367c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.226573 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3b58ad-6afe-4194-a578-2f4fec69367c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.226614 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgjh2\" (UniqueName: \"kubernetes.io/projected/8f3b58ad-6afe-4194-a578-2f4fec69367c-kube-api-access-pgjh2\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.250719 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f3b58ad-6afe-4194-a578-2f4fec69367c-config-data" (OuterVolumeSpecName: "config-data") pod "8f3b58ad-6afe-4194-a578-2f4fec69367c" (UID: "8f3b58ad-6afe-4194-a578-2f4fec69367c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.290709 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rppbn" event={"ID":"8f3b58ad-6afe-4194-a578-2f4fec69367c","Type":"ContainerDied","Data":"6872e37b7f84fd6443ab9ee4de35100ff379655d12d03f09f2131cd440b0be60"}
Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.290748 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6872e37b7f84fd6443ab9ee4de35100ff379655d12d03f09f2131cd440b0be60"
Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.290801 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-rppbn"
Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.297726 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f584ef06-0506-4130-b87a-ec406e89d1f5","Type":"ContainerStarted","Data":"ea23711e60d4a95c2f0c358f340e02d91396bb5eb5ec1265f103955e3096dcb2"}
Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.297918 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f584ef06-0506-4130-b87a-ec406e89d1f5" containerName="ceilometer-central-agent" containerID="cri-o://2cc6c7a597b2080303daf29b00c33c2018487cb789f92dfa77811f37fe0d75a5" gracePeriod=30
Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.297983 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.297991 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f584ef06-0506-4130-b87a-ec406e89d1f5" containerName="ceilometer-notification-agent" containerID="cri-o://5e1fa942c0ff1b85cda1c6ce6325bf99bad53aec28836eee80f2ac547e95187d" gracePeriod=30
Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.297952 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f584ef06-0506-4130-b87a-ec406e89d1f5" containerName="proxy-httpd" containerID="cri-o://ea23711e60d4a95c2f0c358f340e02d91396bb5eb5ec1265f103955e3096dcb2" gracePeriod=30
Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.298085 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f584ef06-0506-4130-b87a-ec406e89d1f5" containerName="sg-core" containerID="cri-o://b91edb3941ecf4c9d9844e364ee4b249a64e9932dc4fbf963e1f784d45802111" gracePeriod=30
Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.340709 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3b58ad-6afe-4194-a578-2f4fec69367c-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.366459 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.099398636 podStartE2EDuration="55.366441587s" podCreationTimestamp="2026-02-16 17:21:42 +0000 UTC" firstStartedPulling="2026-02-16 17:21:44.395246927 +0000 UTC m=+1330.343341574" lastFinishedPulling="2026-02-16 17:22:36.662289878 +0000 UTC m=+1382.610384525" observedRunningTime="2026-02-16 17:22:37.335109852 +0000 UTC m=+1383.283204509" watchObservedRunningTime="2026-02-16 17:22:37.366441587 +0000 UTC m=+1383.314536234"
Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.389967 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5544448f6b-g648r"
Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.525451 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-796f585bbb-7grdw"]
Feb 16 17:22:37 crc kubenswrapper[4794]: W0216 17:22:37.692230 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20d47909_0796_4ee7_8209_9c30ae86ff2f.slice/crio-ae77c99ed2116136c69c3a29a90c065c1658f240d5bb91f3d43cb10ba2b83776 WatchSource:0}: Error finding container ae77c99ed2116136c69c3a29a90c065c1658f240d5bb91f3d43cb10ba2b83776: Status 404 returned error can't find the container with id ae77c99ed2116136c69c3a29a90c065c1658f240d5bb91f3d43cb10ba2b83776
Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.697867 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8df6f765f-hzfz6"]
Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.819763 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-k7qkz"]
Feb 16 17:22:37 crc kubenswrapper[4794]: W0216 17:22:37.831821 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf75a3156_6e40_4c41_b47d_0e0cda2882ba.slice/crio-3e41b9144866a869b0a2623a3646d201ce07a6e8e0990c331363f3b767ec78ea WatchSource:0}: Error finding container 3e41b9144866a869b0a2623a3646d201ce07a6e8e0990c331363f3b767ec78ea: Status 404 returned error can't find the container with id 3e41b9144866a869b0a2623a3646d201ce07a6e8e0990c331363f3b767ec78ea
Feb 16 17:22:37 crc kubenswrapper[4794]: I0216 17:22:37.989849 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5544448f6b-g648r"]
Feb 16 17:22:38 crc kubenswrapper[4794]: W0216 17:22:38.002033 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod483c093a_519b_46a6_87c0_a4b43efc587e.slice/crio-036b321232f9713e98e85b71df988337d930763300082f61d3fd2c7a623ecc42 WatchSource:0}: Error finding container 036b321232f9713e98e85b71df988337d930763300082f61d3fd2c7a623ecc42: Status 404 returned error can't find the container with id 036b321232f9713e98e85b71df988337d930763300082f61d3fd2c7a623ecc42
Feb 16 17:22:38 crc kubenswrapper[4794]: I0216 17:22:38.329430 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8df6f765f-hzfz6" event={"ID":"20d47909-0796-4ee7-8209-9c30ae86ff2f","Type":"ContainerStarted","Data":"ae77c99ed2116136c69c3a29a90c065c1658f240d5bb91f3d43cb10ba2b83776"}
Feb 16 17:22:38 crc kubenswrapper[4794]: I0216 17:22:38.361548 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-796f585bbb-7grdw" event={"ID":"f2beacbf-4b81-4375-be49-872edd3d0d9d","Type":"ContainerStarted","Data":"3c00fa68b8f0ec2232ca3e7438b430751161c93aa7a80cc657cb317c5e68d8a1"}
Feb 16 17:22:38 crc kubenswrapper[4794]: I0216 17:22:38.395372 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5544448f6b-g648r" event={"ID":"483c093a-519b-46a6-87c0-a4b43efc587e","Type":"ContainerStarted","Data":"036b321232f9713e98e85b71df988337d930763300082f61d3fd2c7a623ecc42"}
Feb 16 17:22:38 crc kubenswrapper[4794]: I0216 17:22:38.414760 4794 generic.go:334] "Generic (PLEG): container finished" podID="f584ef06-0506-4130-b87a-ec406e89d1f5" containerID="ea23711e60d4a95c2f0c358f340e02d91396bb5eb5ec1265f103955e3096dcb2" exitCode=0
Feb 16 17:22:38 crc kubenswrapper[4794]: I0216 17:22:38.414793 4794 generic.go:334] "Generic (PLEG): container finished" podID="f584ef06-0506-4130-b87a-ec406e89d1f5" containerID="b91edb3941ecf4c9d9844e364ee4b249a64e9932dc4fbf963e1f784d45802111" exitCode=2
Feb 16 17:22:38 crc kubenswrapper[4794]: I0216 17:22:38.414800 4794 generic.go:334] "Generic (PLEG): container finished" podID="f584ef06-0506-4130-b87a-ec406e89d1f5" containerID="2cc6c7a597b2080303daf29b00c33c2018487cb789f92dfa77811f37fe0d75a5" exitCode=0
Feb 16 17:22:38 crc kubenswrapper[4794]: I0216 17:22:38.414839 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f584ef06-0506-4130-b87a-ec406e89d1f5","Type":"ContainerDied","Data":"ea23711e60d4a95c2f0c358f340e02d91396bb5eb5ec1265f103955e3096dcb2"}
Feb 16 17:22:38 crc kubenswrapper[4794]: I0216 17:22:38.414867 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f584ef06-0506-4130-b87a-ec406e89d1f5","Type":"ContainerDied","Data":"b91edb3941ecf4c9d9844e364ee4b249a64e9932dc4fbf963e1f784d45802111"}
Feb 16 17:22:38 crc kubenswrapper[4794]: I0216 17:22:38.414876 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f584ef06-0506-4130-b87a-ec406e89d1f5","Type":"ContainerDied","Data":"2cc6c7a597b2080303daf29b00c33c2018487cb789f92dfa77811f37fe0d75a5"}
Feb 16 17:22:38 crc kubenswrapper[4794]: I0216 17:22:38.417675 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" event={"ID":"f75a3156-6e40-4c41-b47d-0e0cda2882ba","Type":"ContainerStarted","Data":"3e41b9144866a869b0a2623a3646d201ce07a6e8e0990c331363f3b767ec78ea"}
Feb 16 17:22:38 crc kubenswrapper[4794]: I0216 17:22:38.419259 4794 generic.go:334] "Generic (PLEG): container finished" podID="706ed090-ccb8-4488-ae71-8c991476fd08" containerID="7f12354d91da9ae57eb9a6a0abd89f7615e632c66398378e2e904dc37a6b95a0" exitCode=0
Feb 16 17:22:38 crc kubenswrapper[4794]: I0216 17:22:38.419298 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-t9x9p" event={"ID":"706ed090-ccb8-4488-ae71-8c991476fd08","Type":"ContainerDied","Data":"7f12354d91da9ae57eb9a6a0abd89f7615e632c66398378e2e904dc37a6b95a0"}
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.446403 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5544448f6b-g648r" event={"ID":"483c093a-519b-46a6-87c0-a4b43efc587e","Type":"ContainerStarted","Data":"c7cd0a91ba9b29109548552b7d31997ae242b82a47a51e1549967df7d96246fe"}
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.452951 4794 generic.go:334] "Generic (PLEG): container finished" podID="f584ef06-0506-4130-b87a-ec406e89d1f5" containerID="5e1fa942c0ff1b85cda1c6ce6325bf99bad53aec28836eee80f2ac547e95187d" exitCode=0
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.453027 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f584ef06-0506-4130-b87a-ec406e89d1f5","Type":"ContainerDied","Data":"5e1fa942c0ff1b85cda1c6ce6325bf99bad53aec28836eee80f2ac547e95187d"}
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.456060 4794 generic.go:334] "Generic (PLEG): container finished" podID="f75a3156-6e40-4c41-b47d-0e0cda2882ba" containerID="e60002b6a45a5e88cdd12e847a8212a47ddd722bec6cc649e5b91db0f9b5a2b1" exitCode=0
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.457448 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" event={"ID":"f75a3156-6e40-4c41-b47d-0e0cda2882ba","Type":"ContainerDied","Data":"e60002b6a45a5e88cdd12e847a8212a47ddd722bec6cc649e5b91db0f9b5a2b1"}
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.669101 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.821103 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-scripts\") pod \"f584ef06-0506-4130-b87a-ec406e89d1f5\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") "
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.821203 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-combined-ca-bundle\") pod \"f584ef06-0506-4130-b87a-ec406e89d1f5\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") "
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.821272 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4v52\" (UniqueName: \"kubernetes.io/projected/f584ef06-0506-4130-b87a-ec406e89d1f5-kube-api-access-p4v52\") pod \"f584ef06-0506-4130-b87a-ec406e89d1f5\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") "
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.821336 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-sg-core-conf-yaml\") pod \"f584ef06-0506-4130-b87a-ec406e89d1f5\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") "
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.821419 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f584ef06-0506-4130-b87a-ec406e89d1f5-run-httpd\") pod \"f584ef06-0506-4130-b87a-ec406e89d1f5\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") "
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.821438 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f584ef06-0506-4130-b87a-ec406e89d1f5-log-httpd\") pod \"f584ef06-0506-4130-b87a-ec406e89d1f5\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") "
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.821481 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-config-data\") pod \"f584ef06-0506-4130-b87a-ec406e89d1f5\" (UID: \"f584ef06-0506-4130-b87a-ec406e89d1f5\") "
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.833485 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-scripts" (OuterVolumeSpecName: "scripts") pod "f584ef06-0506-4130-b87a-ec406e89d1f5" (UID: "f584ef06-0506-4130-b87a-ec406e89d1f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.834603 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f584ef06-0506-4130-b87a-ec406e89d1f5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f584ef06-0506-4130-b87a-ec406e89d1f5" (UID: "f584ef06-0506-4130-b87a-ec406e89d1f5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.835230 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f584ef06-0506-4130-b87a-ec406e89d1f5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f584ef06-0506-4130-b87a-ec406e89d1f5" (UID: "f584ef06-0506-4130-b87a-ec406e89d1f5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.856991 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f584ef06-0506-4130-b87a-ec406e89d1f5-kube-api-access-p4v52" (OuterVolumeSpecName: "kube-api-access-p4v52") pod "f584ef06-0506-4130-b87a-ec406e89d1f5" (UID: "f584ef06-0506-4130-b87a-ec406e89d1f5"). InnerVolumeSpecName "kube-api-access-p4v52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.893979 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f584ef06-0506-4130-b87a-ec406e89d1f5" (UID: "f584ef06-0506-4130-b87a-ec406e89d1f5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.924764 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.924797 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4v52\" (UniqueName: \"kubernetes.io/projected/f584ef06-0506-4130-b87a-ec406e89d1f5-kube-api-access-p4v52\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.924810 4794 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.924819 4794 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f584ef06-0506-4130-b87a-ec406e89d1f5-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.924828 4794 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f584ef06-0506-4130-b87a-ec406e89d1f5-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:39 crc kubenswrapper[4794]: I0216 17:22:39.985918 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f584ef06-0506-4130-b87a-ec406e89d1f5" (UID: "f584ef06-0506-4130-b87a-ec406e89d1f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.014815 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-config-data" (OuterVolumeSpecName: "config-data") pod "f584ef06-0506-4130-b87a-ec406e89d1f5" (UID: "f584ef06-0506-4130-b87a-ec406e89d1f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.027772 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.027816 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f584ef06-0506-4130-b87a-ec406e89d1f5-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.141833 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-t9x9p"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.230648 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/706ed090-ccb8-4488-ae71-8c991476fd08-etc-machine-id\") pod \"706ed090-ccb8-4488-ae71-8c991476fd08\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") "
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.230716 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-config-data\") pod \"706ed090-ccb8-4488-ae71-8c991476fd08\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") "
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.230764 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-db-sync-config-data\") pod \"706ed090-ccb8-4488-ae71-8c991476fd08\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") "
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.230831 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-scripts\") pod \"706ed090-ccb8-4488-ae71-8c991476fd08\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") "
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.230977 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvqfk\" (UniqueName: \"kubernetes.io/projected/706ed090-ccb8-4488-ae71-8c991476fd08-kube-api-access-fvqfk\") pod \"706ed090-ccb8-4488-ae71-8c991476fd08\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") "
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.231146 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-combined-ca-bundle\") pod \"706ed090-ccb8-4488-ae71-8c991476fd08\" (UID: \"706ed090-ccb8-4488-ae71-8c991476fd08\") "
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.240893 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/706ed090-ccb8-4488-ae71-8c991476fd08-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "706ed090-ccb8-4488-ae71-8c991476fd08" (UID: "706ed090-ccb8-4488-ae71-8c991476fd08"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.244997 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "706ed090-ccb8-4488-ae71-8c991476fd08" (UID: "706ed090-ccb8-4488-ae71-8c991476fd08"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.257175 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-scripts" (OuterVolumeSpecName: "scripts") pod "706ed090-ccb8-4488-ae71-8c991476fd08" (UID: "706ed090-ccb8-4488-ae71-8c991476fd08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.257273 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/706ed090-ccb8-4488-ae71-8c991476fd08-kube-api-access-fvqfk" (OuterVolumeSpecName: "kube-api-access-fvqfk") pod "706ed090-ccb8-4488-ae71-8c991476fd08" (UID: "706ed090-ccb8-4488-ae71-8c991476fd08"). InnerVolumeSpecName "kube-api-access-fvqfk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.322472 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "706ed090-ccb8-4488-ae71-8c991476fd08" (UID: "706ed090-ccb8-4488-ae71-8c991476fd08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.333649 4794 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.333683 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.333706 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvqfk\" (UniqueName: \"kubernetes.io/projected/706ed090-ccb8-4488-ae71-8c991476fd08-kube-api-access-fvqfk\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.333717 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.333729 4794 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/706ed090-ccb8-4488-ae71-8c991476fd08-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.341477 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-config-data" (OuterVolumeSpecName: "config-data") pod "706ed090-ccb8-4488-ae71-8c991476fd08" (UID: "706ed090-ccb8-4488-ae71-8c991476fd08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.436199 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/706ed090-ccb8-4488-ae71-8c991476fd08-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.469870 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-796f585bbb-7grdw" event={"ID":"f2beacbf-4b81-4375-be49-872edd3d0d9d","Type":"ContainerStarted","Data":"d6d6a6b27e970c35cb42b810397797416abe54b7d4a72a7b7327b2c63c0bb316"}
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.469926 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-796f585bbb-7grdw" event={"ID":"f2beacbf-4b81-4375-be49-872edd3d0d9d","Type":"ContainerStarted","Data":"fa0bea6246926fcfb206a9db03244bbe6c231de4d30d4a9c0ebc4fcde2d31326"}
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.471418 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5544448f6b-g648r" event={"ID":"483c093a-519b-46a6-87c0-a4b43efc587e","Type":"ContainerStarted","Data":"26bc69357d6eb66460b7d582b9d3ce706bda07258ddf7e91852f21e6e928ccc0"}
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.472109 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5544448f6b-g648r"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.472271 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5544448f6b-g648r"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.477137 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f584ef06-0506-4130-b87a-ec406e89d1f5","Type":"ContainerDied","Data":"10eea464aeaf0310266524ae99b31a2de038fa9342d1f8fd78b3906d75a37ecd"}
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.477193 4794 scope.go:117] "RemoveContainer" containerID="ea23711e60d4a95c2f0c358f340e02d91396bb5eb5ec1265f103955e3096dcb2"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.477424 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.482327 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" event={"ID":"f75a3156-6e40-4c41-b47d-0e0cda2882ba","Type":"ContainerStarted","Data":"9075f7fef437bd0281c864892a8d228cc3fdc837fb39c37c2c25fea2896b1828"}
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.482400 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.487221 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-t9x9p" event={"ID":"706ed090-ccb8-4488-ae71-8c991476fd08","Type":"ContainerDied","Data":"f7aed07a34d47035c6f2721756b53444d5dec7ca9ff6cd0d3708f67f76e1193a"}
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.487256 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7aed07a34d47035c6f2721756b53444d5dec7ca9ff6cd0d3708f67f76e1193a"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.487618 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-t9x9p"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.493076 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8df6f765f-hzfz6" event={"ID":"20d47909-0796-4ee7-8209-9c30ae86ff2f","Type":"ContainerStarted","Data":"8b33691a1fa7b087249d1602361b40a7bb527a3986df3012dc4f82495dbb204d"}
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.493108 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8df6f765f-hzfz6" event={"ID":"20d47909-0796-4ee7-8209-9c30ae86ff2f","Type":"ContainerStarted","Data":"3227463d76a42182baaba7377d10dae644bc53ca5ed06e8735822bc62061ea28"}
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.500926 4794 scope.go:117] "RemoveContainer" containerID="b91edb3941ecf4c9d9844e364ee4b249a64e9932dc4fbf963e1f784d45802111"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.502783 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-796f585bbb-7grdw" podStartSLOduration=2.732986564 podStartE2EDuration="4.502761555s" podCreationTimestamp="2026-02-16 17:22:36 +0000 UTC" firstStartedPulling="2026-02-16 17:22:37.536082866 +0000 UTC m=+1383.484177513" lastFinishedPulling="2026-02-16 17:22:39.305857867 +0000 UTC m=+1385.253952504" observedRunningTime="2026-02-16 17:22:40.49478305 +0000 UTC m=+1386.442877697" watchObservedRunningTime="2026-02-16 17:22:40.502761555 +0000 UTC m=+1386.450856202"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.518995 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5544448f6b-g648r" podStartSLOduration=4.518970913 podStartE2EDuration="4.518970913s" podCreationTimestamp="2026-02-16 17:22:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:40.518267663 +0000 UTC m=+1386.466362320" watchObservedRunningTime="2026-02-16 17:22:40.518970913 +0000 UTC m=+1386.467065560"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.525073 4794 scope.go:117] "RemoveContainer" containerID="5e1fa942c0ff1b85cda1c6ce6325bf99bad53aec28836eee80f2ac547e95187d"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.548775 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" podStartSLOduration=4.548756334 podStartE2EDuration="4.548756334s" podCreationTimestamp="2026-02-16 17:22:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:40.548646741 +0000 UTC m=+1386.496741408" watchObservedRunningTime="2026-02-16 17:22:40.548756334 +0000 UTC m=+1386.496850981"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.552136 4794 scope.go:117] "RemoveContainer" containerID="2cc6c7a597b2080303daf29b00c33c2018487cb789f92dfa77811f37fe0d75a5"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.594154 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-8df6f765f-hzfz6" podStartSLOduration=2.9842054559999998 podStartE2EDuration="4.594132235s" podCreationTimestamp="2026-02-16 17:22:36 +0000 UTC" firstStartedPulling="2026-02-16 17:22:37.699003594 +0000 UTC m=+1383.647098241" lastFinishedPulling="2026-02-16 17:22:39.308930373 +0000 UTC m=+1385.257025020" observedRunningTime="2026-02-16 17:22:40.591525661 +0000 UTC m=+1386.539620308" watchObservedRunningTime="2026-02-16 17:22:40.594132235 +0000 UTC m=+1386.542226882"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.625228 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.637007 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.664844 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:22:40 crc kubenswrapper[4794]: E0216 17:22:40.665268 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f584ef06-0506-4130-b87a-ec406e89d1f5" containerName="sg-core"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.665284 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f584ef06-0506-4130-b87a-ec406e89d1f5" containerName="sg-core"
Feb 16 17:22:40 crc kubenswrapper[4794]: E0216 17:22:40.665415 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706ed090-ccb8-4488-ae71-8c991476fd08" containerName="cinder-db-sync"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.665427 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="706ed090-ccb8-4488-ae71-8c991476fd08" containerName="cinder-db-sync"
Feb 16 17:22:40 crc kubenswrapper[4794]: E0216 17:22:40.665438 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f584ef06-0506-4130-b87a-ec406e89d1f5" containerName="ceilometer-notification-agent"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.665446 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f584ef06-0506-4130-b87a-ec406e89d1f5" containerName="ceilometer-notification-agent"
Feb 16 17:22:40 crc kubenswrapper[4794]: E0216 17:22:40.665461 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f3b58ad-6afe-4194-a578-2f4fec69367c" containerName="heat-db-sync"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.665467 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f3b58ad-6afe-4194-a578-2f4fec69367c" containerName="heat-db-sync"
Feb 16 17:22:40 crc kubenswrapper[4794]: E0216 17:22:40.665483 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f584ef06-0506-4130-b87a-ec406e89d1f5" containerName="ceilometer-central-agent"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.665489 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f584ef06-0506-4130-b87a-ec406e89d1f5" containerName="ceilometer-central-agent"
Feb 16 17:22:40 crc kubenswrapper[4794]: E0216 17:22:40.665498 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f584ef06-0506-4130-b87a-ec406e89d1f5" containerName="proxy-httpd"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.665504 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f584ef06-0506-4130-b87a-ec406e89d1f5" containerName="proxy-httpd"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.665693 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f584ef06-0506-4130-b87a-ec406e89d1f5" containerName="proxy-httpd"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.665716 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f584ef06-0506-4130-b87a-ec406e89d1f5" containerName="ceilometer-notification-agent"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.665732 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f584ef06-0506-4130-b87a-ec406e89d1f5" containerName="sg-core"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.665747 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="706ed090-ccb8-4488-ae71-8c991476fd08" containerName="cinder-db-sync"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.665758 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f3b58ad-6afe-4194-a578-2f4fec69367c" containerName="heat-db-sync"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.665765 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f584ef06-0506-4130-b87a-ec406e89d1f5" containerName="ceilometer-central-agent"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.670499 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.672808 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.675730 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.749528 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " pod="openstack/ceilometer-0"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.749599 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-run-httpd\") pod \"ceilometer-0\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " pod="openstack/ceilometer-0"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.749640 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " pod="openstack/ceilometer-0"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.749684 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjtg2\" (UniqueName: \"kubernetes.io/projected/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-kube-api-access-zjtg2\") pod \"ceilometer-0\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " pod="openstack/ceilometer-0"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.749750 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-log-httpd\") pod \"ceilometer-0\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " pod="openstack/ceilometer-0"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.749786 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-scripts\") pod \"ceilometer-0\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " pod="openstack/ceilometer-0"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.749806 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-config-data\") pod \"ceilometer-0\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " pod="openstack/ceilometer-0"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.785814 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.837008 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f584ef06-0506-4130-b87a-ec406e89d1f5" path="/var/lib/kubelet/pods/f584ef06-0506-4130-b87a-ec406e89d1f5/volumes"
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.839393 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.846147 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.846262 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.853943 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " pod="openstack/ceilometer-0" Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.854030 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-run-httpd\") pod \"ceilometer-0\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " pod="openstack/ceilometer-0" Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.854096 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " pod="openstack/ceilometer-0" Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.854151 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjtg2\" (UniqueName: \"kubernetes.io/projected/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-kube-api-access-zjtg2\") pod \"ceilometer-0\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " pod="openstack/ceilometer-0" Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.854249 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-log-httpd\") pod \"ceilometer-0\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " pod="openstack/ceilometer-0" Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.854321 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-scripts\") pod \"ceilometer-0\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " pod="openstack/ceilometer-0" Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.854356 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-config-data\") pod \"ceilometer-0\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " pod="openstack/ceilometer-0" Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.873128 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.873500 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-lc8tq" Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.874031 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-log-httpd\") pod \"ceilometer-0\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " pod="openstack/ceilometer-0" Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.874492 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.875492 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.880070 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-run-httpd\") pod \"ceilometer-0\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " pod="openstack/ceilometer-0" Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.880600 4794 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-config-data\") pod \"ceilometer-0\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " pod="openstack/ceilometer-0" Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.916374 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-k7qkz"] Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.960795 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " pod="openstack/ceilometer-0" Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.963201 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjtg2\" (UniqueName: \"kubernetes.io/projected/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-kube-api-access-zjtg2\") pod \"ceilometer-0\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " pod="openstack/ceilometer-0" Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.967149 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " pod="openstack/ceilometer-0" Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.979903 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-scripts\") pod \"ceilometer-0\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " pod="openstack/ceilometer-0" Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.996019 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:22:40 crc kubenswrapper[4794]: I0216 17:22:40.988859 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-config-data\") pod \"cinder-scheduler-0\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") " pod="openstack/cinder-scheduler-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.011910 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-scripts\") pod \"cinder-scheduler-0\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") " pod="openstack/cinder-scheduler-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.017918 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") " pod="openstack/cinder-scheduler-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.013491 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-r78sp"] Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.022752 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.024823 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") " pod="openstack/cinder-scheduler-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.035623 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") " pod="openstack/cinder-scheduler-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.035910 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2xlg\" (UniqueName: \"kubernetes.io/projected/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-kube-api-access-c2xlg\") pod \"cinder-scheduler-0\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") " pod="openstack/cinder-scheduler-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.144211 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-r78sp\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.144257 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-scripts\") pod \"cinder-scheduler-0\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") " 
pod="openstack/cinder-scheduler-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.144312 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") " pod="openstack/cinder-scheduler-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.144336 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-r78sp\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.144388 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") " pod="openstack/cinder-scheduler-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.144409 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") " pod="openstack/cinder-scheduler-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.144436 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2xlg\" (UniqueName: \"kubernetes.io/projected/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-kube-api-access-c2xlg\") pod \"cinder-scheduler-0\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") " pod="openstack/cinder-scheduler-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 
17:22:41.144472 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxgj7\" (UniqueName: \"kubernetes.io/projected/c932199b-1077-4aa1-aa88-7867c5c84212-kube-api-access-mxgj7\") pod \"dnsmasq-dns-6578955fd5-r78sp\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.144530 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-dns-svc\") pod \"dnsmasq-dns-6578955fd5-r78sp\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.144546 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-r78sp\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.144598 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-config\") pod \"dnsmasq-dns-6578955fd5-r78sp\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.144642 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-config-data\") pod \"cinder-scheduler-0\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") " pod="openstack/cinder-scheduler-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 
17:22:41.145780 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") " pod="openstack/cinder-scheduler-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.153278 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-config-data\") pod \"cinder-scheduler-0\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") " pod="openstack/cinder-scheduler-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.167051 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") " pod="openstack/cinder-scheduler-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.168681 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2xlg\" (UniqueName: \"kubernetes.io/projected/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-kube-api-access-c2xlg\") pod \"cinder-scheduler-0\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") " pod="openstack/cinder-scheduler-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.170727 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") " pod="openstack/cinder-scheduler-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.171102 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-scripts\") pod \"cinder-scheduler-0\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") " pod="openstack/cinder-scheduler-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.186432 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-r78sp"] Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.210503 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.212601 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.216985 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.247027 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-r78sp\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.247249 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-r78sp\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.247494 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxgj7\" (UniqueName: \"kubernetes.io/projected/c932199b-1077-4aa1-aa88-7867c5c84212-kube-api-access-mxgj7\") pod \"dnsmasq-dns-6578955fd5-r78sp\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " 
pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.247612 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-dns-svc\") pod \"dnsmasq-dns-6578955fd5-r78sp\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.247705 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-r78sp\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.247860 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-config\") pod \"dnsmasq-dns-6578955fd5-r78sp\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.248163 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-r78sp\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.248227 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.249340 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-config\") pod \"dnsmasq-dns-6578955fd5-r78sp\" (UID: 
\"c932199b-1077-4aa1-aa88-7867c5c84212\") " pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.250698 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-r78sp\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.251189 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-r78sp\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.251647 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-dns-svc\") pod \"dnsmasq-dns-6578955fd5-r78sp\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.276446 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxgj7\" (UniqueName: \"kubernetes.io/projected/c932199b-1077-4aa1-aa88-7867c5c84212-kube-api-access-mxgj7\") pod \"dnsmasq-dns-6578955fd5-r78sp\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.347072 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6c7c9b8d66-vz9b5"] Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.349296 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c7c9b8d66-vz9b5" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.351652 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks6rh\" (UniqueName: \"kubernetes.io/projected/a3ddbc68-5d23-420f-a844-c55759155260-kube-api-access-ks6rh\") pod \"cinder-api-0\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " pod="openstack/cinder-api-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.352931 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.353065 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.353372 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3ddbc68-5d23-420f-a844-c55759155260-logs\") pod \"cinder-api-0\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " pod="openstack/cinder-api-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.353402 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3ddbc68-5d23-420f-a844-c55759155260-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " pod="openstack/cinder-api-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.353430 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-config-data-custom\") pod \"cinder-api-0\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " pod="openstack/cinder-api-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.353481 4794 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-scripts\") pod \"cinder-api-0\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " pod="openstack/cinder-api-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.353774 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " pod="openstack/cinder-api-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.353795 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-config-data\") pod \"cinder-api-0\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " pod="openstack/cinder-api-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.357543 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c7c9b8d66-vz9b5"] Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.386477 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.425115 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-r78sp"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.461610 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42a40424-c14f-4779-ac7c-d2c5828db304-config-data\") pod \"barbican-api-6c7c9b8d66-vz9b5\" (UID: \"42a40424-c14f-4779-ac7c-d2c5828db304\") " pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.461721 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42a40424-c14f-4779-ac7c-d2c5828db304-combined-ca-bundle\") pod \"barbican-api-6c7c9b8d66-vz9b5\" (UID: \"42a40424-c14f-4779-ac7c-d2c5828db304\") " pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.461785 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkmc4\" (UniqueName: \"kubernetes.io/projected/42a40424-c14f-4779-ac7c-d2c5828db304-kube-api-access-jkmc4\") pod \"barbican-api-6c7c9b8d66-vz9b5\" (UID: \"42a40424-c14f-4779-ac7c-d2c5828db304\") " pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.461846 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42a40424-c14f-4779-ac7c-d2c5828db304-logs\") pod \"barbican-api-6c7c9b8d66-vz9b5\" (UID: \"42a40424-c14f-4779-ac7c-d2c5828db304\") " pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.461872 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42a40424-c14f-4779-ac7c-d2c5828db304-config-data-custom\") pod \"barbican-api-6c7c9b8d66-vz9b5\" (UID: \"42a40424-c14f-4779-ac7c-d2c5828db304\") " pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.461909 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3ddbc68-5d23-420f-a844-c55759155260-logs\") pod \"cinder-api-0\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " pod="openstack/cinder-api-0"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.461936 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3ddbc68-5d23-420f-a844-c55759155260-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " pod="openstack/cinder-api-0"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.461958 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-config-data-custom\") pod \"cinder-api-0\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " pod="openstack/cinder-api-0"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.461991 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-scripts\") pod \"cinder-api-0\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " pod="openstack/cinder-api-0"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.462017 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " pod="openstack/cinder-api-0"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.462032 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42a40424-c14f-4779-ac7c-d2c5828db304-public-tls-certs\") pod \"barbican-api-6c7c9b8d66-vz9b5\" (UID: \"42a40424-c14f-4779-ac7c-d2c5828db304\") " pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.462048 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-config-data\") pod \"cinder-api-0\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " pod="openstack/cinder-api-0"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.462081 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42a40424-c14f-4779-ac7c-d2c5828db304-internal-tls-certs\") pod \"barbican-api-6c7c9b8d66-vz9b5\" (UID: \"42a40424-c14f-4779-ac7c-d2c5828db304\") " pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.462083 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3ddbc68-5d23-420f-a844-c55759155260-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " pod="openstack/cinder-api-0"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.462501 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3ddbc68-5d23-420f-a844-c55759155260-logs\") pod \"cinder-api-0\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " pod="openstack/cinder-api-0"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.463360 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks6rh\" (UniqueName: \"kubernetes.io/projected/a3ddbc68-5d23-420f-a844-c55759155260-kube-api-access-ks6rh\") pod \"cinder-api-0\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " pod="openstack/cinder-api-0"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.466737 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-scripts\") pod \"cinder-api-0\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " pod="openstack/cinder-api-0"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.466985 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " pod="openstack/cinder-api-0"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.468511 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-config-data-custom\") pod \"cinder-api-0\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " pod="openstack/cinder-api-0"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.469840 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-config-data\") pod \"cinder-api-0\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " pod="openstack/cinder-api-0"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.485531 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks6rh\" (UniqueName: \"kubernetes.io/projected/a3ddbc68-5d23-420f-a844-c55759155260-kube-api-access-ks6rh\") pod \"cinder-api-0\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " pod="openstack/cinder-api-0"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.568865 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42a40424-c14f-4779-ac7c-d2c5828db304-config-data\") pod \"barbican-api-6c7c9b8d66-vz9b5\" (UID: \"42a40424-c14f-4779-ac7c-d2c5828db304\") " pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.569008 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42a40424-c14f-4779-ac7c-d2c5828db304-combined-ca-bundle\") pod \"barbican-api-6c7c9b8d66-vz9b5\" (UID: \"42a40424-c14f-4779-ac7c-d2c5828db304\") " pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.569037 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkmc4\" (UniqueName: \"kubernetes.io/projected/42a40424-c14f-4779-ac7c-d2c5828db304-kube-api-access-jkmc4\") pod \"barbican-api-6c7c9b8d66-vz9b5\" (UID: \"42a40424-c14f-4779-ac7c-d2c5828db304\") " pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.569097 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42a40424-c14f-4779-ac7c-d2c5828db304-logs\") pod \"barbican-api-6c7c9b8d66-vz9b5\" (UID: \"42a40424-c14f-4779-ac7c-d2c5828db304\") " pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.569126 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42a40424-c14f-4779-ac7c-d2c5828db304-config-data-custom\") pod \"barbican-api-6c7c9b8d66-vz9b5\" (UID: \"42a40424-c14f-4779-ac7c-d2c5828db304\") " pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.569209 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42a40424-c14f-4779-ac7c-d2c5828db304-public-tls-certs\") pod \"barbican-api-6c7c9b8d66-vz9b5\" (UID: \"42a40424-c14f-4779-ac7c-d2c5828db304\") " pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.569257 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42a40424-c14f-4779-ac7c-d2c5828db304-internal-tls-certs\") pod \"barbican-api-6c7c9b8d66-vz9b5\" (UID: \"42a40424-c14f-4779-ac7c-d2c5828db304\") " pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.574671 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42a40424-c14f-4779-ac7c-d2c5828db304-logs\") pod \"barbican-api-6c7c9b8d66-vz9b5\" (UID: \"42a40424-c14f-4779-ac7c-d2c5828db304\") " pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.579581 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42a40424-c14f-4779-ac7c-d2c5828db304-config-data\") pod \"barbican-api-6c7c9b8d66-vz9b5\" (UID: \"42a40424-c14f-4779-ac7c-d2c5828db304\") " pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.580503 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42a40424-c14f-4779-ac7c-d2c5828db304-config-data-custom\") pod \"barbican-api-6c7c9b8d66-vz9b5\" (UID: \"42a40424-c14f-4779-ac7c-d2c5828db304\") " pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.583108 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.598147 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42a40424-c14f-4779-ac7c-d2c5828db304-combined-ca-bundle\") pod \"barbican-api-6c7c9b8d66-vz9b5\" (UID: \"42a40424-c14f-4779-ac7c-d2c5828db304\") " pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.598503 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42a40424-c14f-4779-ac7c-d2c5828db304-internal-tls-certs\") pod \"barbican-api-6c7c9b8d66-vz9b5\" (UID: \"42a40424-c14f-4779-ac7c-d2c5828db304\") " pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.599157 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42a40424-c14f-4779-ac7c-d2c5828db304-public-tls-certs\") pod \"barbican-api-6c7c9b8d66-vz9b5\" (UID: \"42a40424-c14f-4779-ac7c-d2c5828db304\") " pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.610133 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkmc4\" (UniqueName: \"kubernetes.io/projected/42a40424-c14f-4779-ac7c-d2c5828db304-kube-api-access-jkmc4\") pod \"barbican-api-6c7c9b8d66-vz9b5\" (UID: \"42a40424-c14f-4779-ac7c-d2c5828db304\") " pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.676589 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:41 crc kubenswrapper[4794]: I0216 17:22:41.738739 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:22:42 crc kubenswrapper[4794]: I0216 17:22:42.403963 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 17:22:42 crc kubenswrapper[4794]: I0216 17:22:42.437804 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-r78sp"]
Feb 16 17:22:42 crc kubenswrapper[4794]: I0216 17:22:42.603330 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a","Type":"ContainerStarted","Data":"c91070b600962c2cff667e5761252e17474e9edf08c4ab402fbf18f225cff0c4"}
Feb 16 17:22:42 crc kubenswrapper[4794]: I0216 17:22:42.605336 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddd9f53-b6ac-4624-92b4-a076ad62d8de","Type":"ContainerStarted","Data":"22477a207ed67d88b475741a1772c0f8c9cb8d095ec08b5046c0e0b0e83aafdf"}
Feb 16 17:22:42 crc kubenswrapper[4794]: I0216 17:22:42.606866 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-r78sp" event={"ID":"c932199b-1077-4aa1-aa88-7867c5c84212","Type":"ContainerStarted","Data":"3685edbc37b084f6df55eacb53496f7a085a248e9d6ba1aacb2634c13d15d9a3"}
Feb 16 17:22:42 crc kubenswrapper[4794]: I0216 17:22:42.607029 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" podUID="f75a3156-6e40-4c41-b47d-0e0cda2882ba" containerName="dnsmasq-dns" containerID="cri-o://9075f7fef437bd0281c864892a8d228cc3fdc837fb39c37c2c25fea2896b1828" gracePeriod=10
Feb 16 17:22:42 crc kubenswrapper[4794]: I0216 17:22:42.689423 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c7c9b8d66-vz9b5"]
Feb 16 17:22:42 crc kubenswrapper[4794]: W0216 17:22:42.708962 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42a40424_c14f_4779_ac7c_d2c5828db304.slice/crio-fc46f50f54a71631dea62e1a3be766d07a569511971c7d6e0cf964a905c284da WatchSource:0}: Error finding container fc46f50f54a71631dea62e1a3be766d07a569511971c7d6e0cf964a905c284da: Status 404 returned error can't find the container with id fc46f50f54a71631dea62e1a3be766d07a569511971c7d6e0cf964a905c284da
Feb 16 17:22:42 crc kubenswrapper[4794]: W0216 17:22:42.710415 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ddbc68_5d23_420f_a844_c55759155260.slice/crio-1102e60ca1697705c26d4ad4a7d48484ae4c77431be2d97e106bc01e62d1cec1 WatchSource:0}: Error finding container 1102e60ca1697705c26d4ad4a7d48484ae4c77431be2d97e106bc01e62d1cec1: Status 404 returned error can't find the container with id 1102e60ca1697705c26d4ad4a7d48484ae4c77431be2d97e106bc01e62d1cec1
Feb 16 17:22:42 crc kubenswrapper[4794]: I0216 17:22:42.713845 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.333605 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz"
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.435910 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqdlt\" (UniqueName: \"kubernetes.io/projected/f75a3156-6e40-4c41-b47d-0e0cda2882ba-kube-api-access-pqdlt\") pod \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") "
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.436237 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-dns-swift-storage-0\") pod \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") "
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.437017 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-config\") pod \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") "
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.437194 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-dns-svc\") pod \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") "
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.437281 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-ovsdbserver-nb\") pod \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") "
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.437406 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-ovsdbserver-sb\") pod \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\" (UID: \"f75a3156-6e40-4c41-b47d-0e0cda2882ba\") "
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.448808 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f75a3156-6e40-4c41-b47d-0e0cda2882ba-kube-api-access-pqdlt" (OuterVolumeSpecName: "kube-api-access-pqdlt") pod "f75a3156-6e40-4c41-b47d-0e0cda2882ba" (UID: "f75a3156-6e40-4c41-b47d-0e0cda2882ba"). InnerVolumeSpecName "kube-api-access-pqdlt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.524822 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f75a3156-6e40-4c41-b47d-0e0cda2882ba" (UID: "f75a3156-6e40-4c41-b47d-0e0cda2882ba"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.541498 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqdlt\" (UniqueName: \"kubernetes.io/projected/f75a3156-6e40-4c41-b47d-0e0cda2882ba-kube-api-access-pqdlt\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.541532 4794 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.643813 4794 generic.go:334] "Generic (PLEG): container finished" podID="c932199b-1077-4aa1-aa88-7867c5c84212" containerID="c61f43a7593538088ee05f9232a915c955142186f9d16f1a1fbd078e2e9acd40" exitCode=0
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.643913 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-r78sp" event={"ID":"c932199b-1077-4aa1-aa88-7867c5c84212","Type":"ContainerDied","Data":"c61f43a7593538088ee05f9232a915c955142186f9d16f1a1fbd078e2e9acd40"}
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.655366 4794 generic.go:334] "Generic (PLEG): container finished" podID="f75a3156-6e40-4c41-b47d-0e0cda2882ba" containerID="9075f7fef437bd0281c864892a8d228cc3fdc837fb39c37c2c25fea2896b1828" exitCode=0
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.655721 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" event={"ID":"f75a3156-6e40-4c41-b47d-0e0cda2882ba","Type":"ContainerDied","Data":"9075f7fef437bd0281c864892a8d228cc3fdc837fb39c37c2c25fea2896b1828"}
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.655763 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz"
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.655800 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-k7qkz" event={"ID":"f75a3156-6e40-4c41-b47d-0e0cda2882ba","Type":"ContainerDied","Data":"3e41b9144866a869b0a2623a3646d201ce07a6e8e0990c331363f3b767ec78ea"}
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.656435 4794 scope.go:117] "RemoveContainer" containerID="9075f7fef437bd0281c864892a8d228cc3fdc837fb39c37c2c25fea2896b1828"
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.663618 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a3ddbc68-5d23-420f-a844-c55759155260","Type":"ContainerStarted","Data":"1102e60ca1697705c26d4ad4a7d48484ae4c77431be2d97e106bc01e62d1cec1"}
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.674289 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddd9f53-b6ac-4624-92b4-a076ad62d8de","Type":"ContainerStarted","Data":"68cb34ae47ee273c416e83f42d642c333007a14d80b0330f29a6a98f12df8e42"}
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.679869 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c7c9b8d66-vz9b5" event={"ID":"42a40424-c14f-4779-ac7c-d2c5828db304","Type":"ContainerStarted","Data":"5dfcf1746b7ce8648186938e535fb2009f3d95df0b3df73b9a4ab6cf8f209dc9"}
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.679926 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c7c9b8d66-vz9b5" event={"ID":"42a40424-c14f-4779-ac7c-d2c5828db304","Type":"ContainerStarted","Data":"fc46f50f54a71631dea62e1a3be766d07a569511971c7d6e0cf964a905c284da"}
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.754968 4794 scope.go:117] "RemoveContainer" containerID="e60002b6a45a5e88cdd12e847a8212a47ddd722bec6cc649e5b91db0f9b5a2b1"
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.909002 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f75a3156-6e40-4c41-b47d-0e0cda2882ba" (UID: "f75a3156-6e40-4c41-b47d-0e0cda2882ba"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.925957 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f75a3156-6e40-4c41-b47d-0e0cda2882ba" (UID: "f75a3156-6e40-4c41-b47d-0e0cda2882ba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.936748 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-config" (OuterVolumeSpecName: "config") pod "f75a3156-6e40-4c41-b47d-0e0cda2882ba" (UID: "f75a3156-6e40-4c41-b47d-0e0cda2882ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.951784 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f75a3156-6e40-4c41-b47d-0e0cda2882ba" (UID: "f75a3156-6e40-4c41-b47d-0e0cda2882ba"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.967614 4794 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.967650 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.967664 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:43 crc kubenswrapper[4794]: I0216 17:22:43.967673 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f75a3156-6e40-4c41-b47d-0e0cda2882ba-config\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:44 crc kubenswrapper[4794]: I0216 17:22:44.179954 4794 scope.go:117] "RemoveContainer" containerID="9075f7fef437bd0281c864892a8d228cc3fdc837fb39c37c2c25fea2896b1828"
Feb 16 17:22:44 crc kubenswrapper[4794]: E0216 17:22:44.180656 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9075f7fef437bd0281c864892a8d228cc3fdc837fb39c37c2c25fea2896b1828\": container with ID starting with 9075f7fef437bd0281c864892a8d228cc3fdc837fb39c37c2c25fea2896b1828 not found: ID does not exist" containerID="9075f7fef437bd0281c864892a8d228cc3fdc837fb39c37c2c25fea2896b1828"
Feb 16 17:22:44 crc kubenswrapper[4794]: I0216 17:22:44.180730 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9075f7fef437bd0281c864892a8d228cc3fdc837fb39c37c2c25fea2896b1828"} err="failed to get container status \"9075f7fef437bd0281c864892a8d228cc3fdc837fb39c37c2c25fea2896b1828\": rpc error: code = NotFound desc = could not find container \"9075f7fef437bd0281c864892a8d228cc3fdc837fb39c37c2c25fea2896b1828\": container with ID starting with 9075f7fef437bd0281c864892a8d228cc3fdc837fb39c37c2c25fea2896b1828 not found: ID does not exist"
Feb 16 17:22:44 crc kubenswrapper[4794]: I0216 17:22:44.180760 4794 scope.go:117] "RemoveContainer" containerID="e60002b6a45a5e88cdd12e847a8212a47ddd722bec6cc649e5b91db0f9b5a2b1"
Feb 16 17:22:44 crc kubenswrapper[4794]: E0216 17:22:44.190192 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e60002b6a45a5e88cdd12e847a8212a47ddd722bec6cc649e5b91db0f9b5a2b1\": container with ID starting with e60002b6a45a5e88cdd12e847a8212a47ddd722bec6cc649e5b91db0f9b5a2b1 not found: ID does not exist" containerID="e60002b6a45a5e88cdd12e847a8212a47ddd722bec6cc649e5b91db0f9b5a2b1"
Feb 16 17:22:44 crc kubenswrapper[4794]: I0216 17:22:44.190244 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e60002b6a45a5e88cdd12e847a8212a47ddd722bec6cc649e5b91db0f9b5a2b1"} err="failed to get container status \"e60002b6a45a5e88cdd12e847a8212a47ddd722bec6cc649e5b91db0f9b5a2b1\": rpc error: code = NotFound desc = could not find container \"e60002b6a45a5e88cdd12e847a8212a47ddd722bec6cc649e5b91db0f9b5a2b1\": container with ID starting with e60002b6a45a5e88cdd12e847a8212a47ddd722bec6cc649e5b91db0f9b5a2b1 not found: ID does not exist"
Feb 16 17:22:44 crc kubenswrapper[4794]: I0216 17:22:44.235378 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 16 17:22:44 crc kubenswrapper[4794]: I0216 17:22:44.257633 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-k7qkz"]
Feb 16 17:22:44 crc kubenswrapper[4794]: I0216 17:22:44.271786 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-k7qkz"]
Feb 16 17:22:44 crc kubenswrapper[4794]: I0216 17:22:44.734711 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a3ddbc68-5d23-420f-a844-c55759155260","Type":"ContainerStarted","Data":"df9a6e065fb5627d63f0f8ec59dba882792d59a7d229e3380544dc340f4051b5"}
Feb 16 17:22:44 crc kubenswrapper[4794]: I0216 17:22:44.740838 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddd9f53-b6ac-4624-92b4-a076ad62d8de","Type":"ContainerStarted","Data":"b7b183697fa62a18c0922b1d7b0ce4e7591a68eadb850d74d1de2eba8b6349a2"}
Feb 16 17:22:44 crc kubenswrapper[4794]: I0216 17:22:44.742920 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c7c9b8d66-vz9b5" event={"ID":"42a40424-c14f-4779-ac7c-d2c5828db304","Type":"ContainerStarted","Data":"ff6c4e4321597679ca4b201cbf4e536a6af328fe6d59efd526d3a62c228f2158"}
Feb 16 17:22:44 crc kubenswrapper[4794]: I0216 17:22:44.744259 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:44 crc kubenswrapper[4794]: I0216 17:22:44.744295 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:44 crc kubenswrapper[4794]: I0216 17:22:44.780877 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6c7c9b8d66-vz9b5" podStartSLOduration=3.780858099 podStartE2EDuration="3.780858099s" podCreationTimestamp="2026-02-16 17:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:44.780259912 +0000 UTC m=+1390.728354569" watchObservedRunningTime="2026-02-16 17:22:44.780858099 +0000 UTC m=+1390.728952746"
Feb 16 17:22:44 crc kubenswrapper[4794]: I0216 17:22:44.805563 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f75a3156-6e40-4c41-b47d-0e0cda2882ba" path="/var/lib/kubelet/pods/f75a3156-6e40-4c41-b47d-0e0cda2882ba/volumes"
Feb 16 17:22:44 crc kubenswrapper[4794]: I0216 17:22:44.877007 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-99f86f5f6-sdjdr"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.170936 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c659f6967-vsf27"]
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.171678 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-c659f6967-vsf27" podUID="20805f32-52bf-4449-90fd-8e83635f8154" containerName="neutron-api" containerID="cri-o://a07003dd6d04b6392ad6a95ed24662d7aed38806951188bf1495200e0a697d3e" gracePeriod=30
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.171841 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-c659f6967-vsf27" podUID="20805f32-52bf-4449-90fd-8e83635f8154" containerName="neutron-httpd" containerID="cri-o://dfba6fb97b7e10eb69d8f50d615d60e352c844939be4b756403304dee500a66e" gracePeriod=30
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.180626 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-c659f6967-vsf27" podUID="20805f32-52bf-4449-90fd-8e83635f8154" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.195:9696/\": EOF"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.220176 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6cdff78ddf-hj4zf"]
Feb 16 17:22:45 crc kubenswrapper[4794]: E0216 17:22:45.220670 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f75a3156-6e40-4c41-b47d-0e0cda2882ba" containerName="dnsmasq-dns"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.220691 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f75a3156-6e40-4c41-b47d-0e0cda2882ba" containerName="dnsmasq-dns"
Feb 16 17:22:45 crc kubenswrapper[4794]: E0216 17:22:45.220719 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f75a3156-6e40-4c41-b47d-0e0cda2882ba" containerName="init"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.220726 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f75a3156-6e40-4c41-b47d-0e0cda2882ba" containerName="init"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.220936 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f75a3156-6e40-4c41-b47d-0e0cda2882ba" containerName="dnsmasq-dns"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.222195 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6cdff78ddf-hj4zf"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.244595 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6cdff78ddf-hj4zf"]
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.323715 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7419e1b3-c58c-499d-bed5-5b8404f50c31-ovndb-tls-certs\") pod \"neutron-6cdff78ddf-hj4zf\" (UID: \"7419e1b3-c58c-499d-bed5-5b8404f50c31\") " pod="openstack/neutron-6cdff78ddf-hj4zf"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.323902 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7419e1b3-c58c-499d-bed5-5b8404f50c31-config\") pod \"neutron-6cdff78ddf-hj4zf\" (UID: \"7419e1b3-c58c-499d-bed5-5b8404f50c31\") " pod="openstack/neutron-6cdff78ddf-hj4zf"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.323965 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjbmp\" (UniqueName: \"kubernetes.io/projected/7419e1b3-c58c-499d-bed5-5b8404f50c31-kube-api-access-hjbmp\") pod \"neutron-6cdff78ddf-hj4zf\" (UID: \"7419e1b3-c58c-499d-bed5-5b8404f50c31\") " pod="openstack/neutron-6cdff78ddf-hj4zf"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.324000 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7419e1b3-c58c-499d-bed5-5b8404f50c31-internal-tls-certs\") pod \"neutron-6cdff78ddf-hj4zf\" (UID: \"7419e1b3-c58c-499d-bed5-5b8404f50c31\") " pod="openstack/neutron-6cdff78ddf-hj4zf"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.324102 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7419e1b3-c58c-499d-bed5-5b8404f50c31-public-tls-certs\") pod \"neutron-6cdff78ddf-hj4zf\" (UID: \"7419e1b3-c58c-499d-bed5-5b8404f50c31\") " pod="openstack/neutron-6cdff78ddf-hj4zf"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.324259 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7419e1b3-c58c-499d-bed5-5b8404f50c31-combined-ca-bundle\") pod \"neutron-6cdff78ddf-hj4zf\" (UID: \"7419e1b3-c58c-499d-bed5-5b8404f50c31\") " pod="openstack/neutron-6cdff78ddf-hj4zf"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.324470 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7419e1b3-c58c-499d-bed5-5b8404f50c31-httpd-config\") pod \"neutron-6cdff78ddf-hj4zf\" (UID: \"7419e1b3-c58c-499d-bed5-5b8404f50c31\") " pod="openstack/neutron-6cdff78ddf-hj4zf"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.426552 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7419e1b3-c58c-499d-bed5-5b8404f50c31-config\") pod \"neutron-6cdff78ddf-hj4zf\" (UID: \"7419e1b3-c58c-499d-bed5-5b8404f50c31\") " pod="openstack/neutron-6cdff78ddf-hj4zf"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.427004 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjbmp\" (UniqueName: \"kubernetes.io/projected/7419e1b3-c58c-499d-bed5-5b8404f50c31-kube-api-access-hjbmp\") pod \"neutron-6cdff78ddf-hj4zf\" (UID: \"7419e1b3-c58c-499d-bed5-5b8404f50c31\") " pod="openstack/neutron-6cdff78ddf-hj4zf"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.427027 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7419e1b3-c58c-499d-bed5-5b8404f50c31-internal-tls-certs\") pod \"neutron-6cdff78ddf-hj4zf\" (UID: \"7419e1b3-c58c-499d-bed5-5b8404f50c31\") " pod="openstack/neutron-6cdff78ddf-hj4zf"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.427072 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7419e1b3-c58c-499d-bed5-5b8404f50c31-public-tls-certs\") pod \"neutron-6cdff78ddf-hj4zf\" (UID: \"7419e1b3-c58c-499d-bed5-5b8404f50c31\") " pod="openstack/neutron-6cdff78ddf-hj4zf"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.427162 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7419e1b3-c58c-499d-bed5-5b8404f50c31-combined-ca-bundle\") pod \"neutron-6cdff78ddf-hj4zf\" (UID: \"7419e1b3-c58c-499d-bed5-5b8404f50c31\") " pod="openstack/neutron-6cdff78ddf-hj4zf"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.427195 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7419e1b3-c58c-499d-bed5-5b8404f50c31-httpd-config\") pod \"neutron-6cdff78ddf-hj4zf\" (UID: \"7419e1b3-c58c-499d-bed5-5b8404f50c31\") " pod="openstack/neutron-6cdff78ddf-hj4zf"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.427249 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7419e1b3-c58c-499d-bed5-5b8404f50c31-ovndb-tls-certs\") pod \"neutron-6cdff78ddf-hj4zf\" (UID: \"7419e1b3-c58c-499d-bed5-5b8404f50c31\") " pod="openstack/neutron-6cdff78ddf-hj4zf"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.433601 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7419e1b3-c58c-499d-bed5-5b8404f50c31-ovndb-tls-certs\") pod \"neutron-6cdff78ddf-hj4zf\" (UID: \"7419e1b3-c58c-499d-bed5-5b8404f50c31\") " pod="openstack/neutron-6cdff78ddf-hj4zf"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.434647 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7419e1b3-c58c-499d-bed5-5b8404f50c31-combined-ca-bundle\") pod \"neutron-6cdff78ddf-hj4zf\" (UID: \"7419e1b3-c58c-499d-bed5-5b8404f50c31\") " pod="openstack/neutron-6cdff78ddf-hj4zf"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.436148 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7419e1b3-c58c-499d-bed5-5b8404f50c31-internal-tls-certs\") pod \"neutron-6cdff78ddf-hj4zf\" (UID: \"7419e1b3-c58c-499d-bed5-5b8404f50c31\") " pod="openstack/neutron-6cdff78ddf-hj4zf"
Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.436757 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7419e1b3-c58c-499d-bed5-5b8404f50c31-config\") pod \"neutron-6cdff78ddf-hj4zf\" (UID: \"7419e1b3-c58c-499d-bed5-5b8404f50c31\") " pod="openstack/neutron-6cdff78ddf-hj4zf"
Feb 16 17:22:45
crc kubenswrapper[4794]: I0216 17:22:45.436772 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7419e1b3-c58c-499d-bed5-5b8404f50c31-public-tls-certs\") pod \"neutron-6cdff78ddf-hj4zf\" (UID: \"7419e1b3-c58c-499d-bed5-5b8404f50c31\") " pod="openstack/neutron-6cdff78ddf-hj4zf" Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.437688 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7419e1b3-c58c-499d-bed5-5b8404f50c31-httpd-config\") pod \"neutron-6cdff78ddf-hj4zf\" (UID: \"7419e1b3-c58c-499d-bed5-5b8404f50c31\") " pod="openstack/neutron-6cdff78ddf-hj4zf" Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.455657 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjbmp\" (UniqueName: \"kubernetes.io/projected/7419e1b3-c58c-499d-bed5-5b8404f50c31-kube-api-access-hjbmp\") pod \"neutron-6cdff78ddf-hj4zf\" (UID: \"7419e1b3-c58c-499d-bed5-5b8404f50c31\") " pod="openstack/neutron-6cdff78ddf-hj4zf" Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.591619 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6cdff78ddf-hj4zf" Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.772324 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a3ddbc68-5d23-420f-a844-c55759155260" containerName="cinder-api-log" containerID="cri-o://df9a6e065fb5627d63f0f8ec59dba882792d59a7d229e3380544dc340f4051b5" gracePeriod=30 Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.772811 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a3ddbc68-5d23-420f-a844-c55759155260","Type":"ContainerStarted","Data":"201f8bd8e605ac419284e64c74d6ff0721a0638fba59d4096a40a4878d91a42b"} Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.772863 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.773131 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a3ddbc68-5d23-420f-a844-c55759155260" containerName="cinder-api" containerID="cri-o://201f8bd8e605ac419284e64c74d6ff0721a0638fba59d4096a40a4878d91a42b" gracePeriod=30 Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.781216 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a","Type":"ContainerStarted","Data":"e04ae9a3d6fc02a14610d806806b30ba6520ba8db61425bcabc5c638507f84b3"} Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.857555 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.857527393 podStartE2EDuration="5.857527393s" podCreationTimestamp="2026-02-16 17:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:45.810816774 +0000 UTC m=+1391.758911431" 
watchObservedRunningTime="2026-02-16 17:22:45.857527393 +0000 UTC m=+1391.805622040" Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.868097 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-r78sp" event={"ID":"c932199b-1077-4aa1-aa88-7867c5c84212","Type":"ContainerStarted","Data":"fe46504ecb88e7e48e3acf8fb9b0e1f42b729b703c11863dd2c44da68e2cfa2e"} Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.874725 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.895886 4794 generic.go:334] "Generic (PLEG): container finished" podID="20805f32-52bf-4449-90fd-8e83635f8154" containerID="dfba6fb97b7e10eb69d8f50d615d60e352c844939be4b756403304dee500a66e" exitCode=0 Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.896154 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c659f6967-vsf27" event={"ID":"20805f32-52bf-4449-90fd-8e83635f8154","Type":"ContainerDied","Data":"dfba6fb97b7e10eb69d8f50d615d60e352c844939be4b756403304dee500a66e"} Feb 16 17:22:45 crc kubenswrapper[4794]: I0216 17:22:45.904203 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-r78sp" podStartSLOduration=5.90418118 podStartE2EDuration="5.90418118s" podCreationTimestamp="2026-02-16 17:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:45.889877426 +0000 UTC m=+1391.837972083" watchObservedRunningTime="2026-02-16 17:22:45.90418118 +0000 UTC m=+1391.852275827" Feb 16 17:22:46 crc kubenswrapper[4794]: I0216 17:22:46.340598 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6cdff78ddf-hj4zf"] Feb 16 17:22:46 crc kubenswrapper[4794]: I0216 17:22:46.919063 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-6cdff78ddf-hj4zf" event={"ID":"7419e1b3-c58c-499d-bed5-5b8404f50c31","Type":"ContainerStarted","Data":"268c2fafba457736e4705e0b5043c1924dbe98fb7d53fc5b886f7dda8d53bfa1"} Feb 16 17:22:46 crc kubenswrapper[4794]: I0216 17:22:46.919529 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cdff78ddf-hj4zf" event={"ID":"7419e1b3-c58c-499d-bed5-5b8404f50c31","Type":"ContainerStarted","Data":"f8c837c58d20be09cbef3eeb855abf5f01e4f50f780a024b948ca7ebe2a85938"} Feb 16 17:22:46 crc kubenswrapper[4794]: I0216 17:22:46.922797 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a","Type":"ContainerStarted","Data":"482fb8816f84e0436fcbb7b6e7cee0c98c46e8439949473d16deeb1a5e7e5683"} Feb 16 17:22:46 crc kubenswrapper[4794]: I0216 17:22:46.937692 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddd9f53-b6ac-4624-92b4-a076ad62d8de","Type":"ContainerStarted","Data":"5fc05aaf20aad6db9c0337f839a3c35aef358c9173809b50b0e5fbfe3bfeed28"} Feb 16 17:22:46 crc kubenswrapper[4794]: I0216 17:22:46.948165 4794 generic.go:334] "Generic (PLEG): container finished" podID="a3ddbc68-5d23-420f-a844-c55759155260" containerID="df9a6e065fb5627d63f0f8ec59dba882792d59a7d229e3380544dc340f4051b5" exitCode=143 Feb 16 17:22:46 crc kubenswrapper[4794]: I0216 17:22:46.949172 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a3ddbc68-5d23-420f-a844-c55759155260","Type":"ContainerDied","Data":"df9a6e065fb5627d63f0f8ec59dba882792d59a7d229e3380544dc340f4051b5"} Feb 16 17:22:46 crc kubenswrapper[4794]: I0216 17:22:46.953576 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.084589003 podStartE2EDuration="6.953558815s" podCreationTimestamp="2026-02-16 17:22:40 +0000 UTC" firstStartedPulling="2026-02-16 
17:22:42.419479625 +0000 UTC m=+1388.367574272" lastFinishedPulling="2026-02-16 17:22:43.288449437 +0000 UTC m=+1389.236544084" observedRunningTime="2026-02-16 17:22:46.949973854 +0000 UTC m=+1392.898068501" watchObservedRunningTime="2026-02-16 17:22:46.953558815 +0000 UTC m=+1392.901653462" Feb 16 17:22:47 crc kubenswrapper[4794]: I0216 17:22:47.150791 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-c659f6967-vsf27" podUID="20805f32-52bf-4449-90fd-8e83635f8154" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.195:9696/\": dial tcp 10.217.0.195:9696: connect: connection refused" Feb 16 17:22:47 crc kubenswrapper[4794]: I0216 17:22:47.960402 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cdff78ddf-hj4zf" event={"ID":"7419e1b3-c58c-499d-bed5-5b8404f50c31","Type":"ContainerStarted","Data":"3bdc7efa63d89e9533dc7ec57ed4e889056a64cde54d861563fd0fcb224e664c"} Feb 16 17:22:47 crc kubenswrapper[4794]: I0216 17:22:47.960829 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6cdff78ddf-hj4zf" Feb 16 17:22:47 crc kubenswrapper[4794]: I0216 17:22:47.962963 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddd9f53-b6ac-4624-92b4-a076ad62d8de","Type":"ContainerStarted","Data":"d8ca38db2ddc3163a6c287eb50021e606fa8f95771f31edd7027c566317049fa"} Feb 16 17:22:47 crc kubenswrapper[4794]: I0216 17:22:47.992651 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6cdff78ddf-hj4zf" podStartSLOduration=2.992631909 podStartE2EDuration="2.992631909s" podCreationTimestamp="2026-02-16 17:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:47.987031431 +0000 UTC m=+1393.935126078" watchObservedRunningTime="2026-02-16 17:22:47.992631909 +0000 UTC 
m=+1393.940726556" Feb 16 17:22:48 crc kubenswrapper[4794]: I0216 17:22:48.011178 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.64705572 podStartE2EDuration="8.011161232s" podCreationTimestamp="2026-02-16 17:22:40 +0000 UTC" firstStartedPulling="2026-02-16 17:22:41.837009381 +0000 UTC m=+1387.785104028" lastFinishedPulling="2026-02-16 17:22:47.201114893 +0000 UTC m=+1393.149209540" observedRunningTime="2026-02-16 17:22:48.008157237 +0000 UTC m=+1393.956251884" watchObservedRunningTime="2026-02-16 17:22:48.011161232 +0000 UTC m=+1393.959255879" Feb 16 17:22:48 crc kubenswrapper[4794]: I0216 17:22:48.973657 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 17:22:49 crc kubenswrapper[4794]: I0216 17:22:49.115663 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5544448f6b-g648r" Feb 16 17:22:49 crc kubenswrapper[4794]: I0216 17:22:49.484923 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5544448f6b-g648r" Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.010342 4794 generic.go:334] "Generic (PLEG): container finished" podID="20805f32-52bf-4449-90fd-8e83635f8154" containerID="a07003dd6d04b6392ad6a95ed24662d7aed38806951188bf1495200e0a697d3e" exitCode=0 Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.010851 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c659f6967-vsf27" event={"ID":"20805f32-52bf-4449-90fd-8e83635f8154","Type":"ContainerDied","Data":"a07003dd6d04b6392ad6a95ed24662d7aed38806951188bf1495200e0a697d3e"} Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.151517 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.278375 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-config\") pod \"20805f32-52bf-4449-90fd-8e83635f8154\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.278454 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-public-tls-certs\") pod \"20805f32-52bf-4449-90fd-8e83635f8154\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.278522 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29rk8\" (UniqueName: \"kubernetes.io/projected/20805f32-52bf-4449-90fd-8e83635f8154-kube-api-access-29rk8\") pod \"20805f32-52bf-4449-90fd-8e83635f8154\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.278551 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-internal-tls-certs\") pod \"20805f32-52bf-4449-90fd-8e83635f8154\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.278572 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-combined-ca-bundle\") pod \"20805f32-52bf-4449-90fd-8e83635f8154\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.278601 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-ovndb-tls-certs\") pod \"20805f32-52bf-4449-90fd-8e83635f8154\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.278642 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-httpd-config\") pod \"20805f32-52bf-4449-90fd-8e83635f8154\" (UID: \"20805f32-52bf-4449-90fd-8e83635f8154\") " Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.297684 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "20805f32-52bf-4449-90fd-8e83635f8154" (UID: "20805f32-52bf-4449-90fd-8e83635f8154"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.298034 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20805f32-52bf-4449-90fd-8e83635f8154-kube-api-access-29rk8" (OuterVolumeSpecName: "kube-api-access-29rk8") pod "20805f32-52bf-4449-90fd-8e83635f8154" (UID: "20805f32-52bf-4449-90fd-8e83635f8154"). InnerVolumeSpecName "kube-api-access-29rk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.382733 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29rk8\" (UniqueName: \"kubernetes.io/projected/20805f32-52bf-4449-90fd-8e83635f8154-kube-api-access-29rk8\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.382766 4794 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.384595 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "20805f32-52bf-4449-90fd-8e83635f8154" (UID: "20805f32-52bf-4449-90fd-8e83635f8154"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.388697 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.402499 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "20805f32-52bf-4449-90fd-8e83635f8154" (UID: "20805f32-52bf-4449-90fd-8e83635f8154"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.412115 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "20805f32-52bf-4449-90fd-8e83635f8154" (UID: "20805f32-52bf-4449-90fd-8e83635f8154"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.431433 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.463520 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-config" (OuterVolumeSpecName: "config") pod "20805f32-52bf-4449-90fd-8e83635f8154" (UID: "20805f32-52bf-4449-90fd-8e83635f8154"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.487721 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.487761 4794 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.487773 4794 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.487785 4794 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.503668 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20805f32-52bf-4449-90fd-8e83635f8154" (UID: "20805f32-52bf-4449-90fd-8e83635f8154"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.552903 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-bxq86"] Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.553186 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-bxq86" podUID="a0a2ba29-1ca7-4b10-9f24-5810b4e27296" containerName="dnsmasq-dns" containerID="cri-o://22ab93557e6b5b080a320d16a6c25285734d1893f3818562159a1838e2b62e67" gracePeriod=10 Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.597231 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20805f32-52bf-4449-90fd-8e83635f8154-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:51 crc kubenswrapper[4794]: I0216 17:22:51.840320 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.031836 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c659f6967-vsf27" event={"ID":"20805f32-52bf-4449-90fd-8e83635f8154","Type":"ContainerDied","Data":"1a982d82083335fe814998413022d818815014621912fcf221278afcc2aba732"} Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.031909 4794 scope.go:117] "RemoveContainer" containerID="dfba6fb97b7e10eb69d8f50d615d60e352c844939be4b756403304dee500a66e" Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.032095 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c659f6967-vsf27" Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.038110 4794 generic.go:334] "Generic (PLEG): container finished" podID="a0a2ba29-1ca7-4b10-9f24-5810b4e27296" containerID="22ab93557e6b5b080a320d16a6c25285734d1893f3818562159a1838e2b62e67" exitCode=0 Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.038257 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-bxq86" event={"ID":"a0a2ba29-1ca7-4b10-9f24-5810b4e27296","Type":"ContainerDied","Data":"22ab93557e6b5b080a320d16a6c25285734d1893f3818562159a1838e2b62e67"} Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.110842 4794 scope.go:117] "RemoveContainer" containerID="a07003dd6d04b6392ad6a95ed24662d7aed38806951188bf1495200e0a697d3e" Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.110875 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.136687 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c659f6967-vsf27"] Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.179831 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c659f6967-vsf27"] Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.448798 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-bxq86" Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.536134 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-ovsdbserver-sb\") pod \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.536250 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-dns-swift-storage-0\") pod \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.536347 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-config\") pod \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.536466 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-ovsdbserver-nb\") pod \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.536496 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqjfz\" (UniqueName: \"kubernetes.io/projected/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-kube-api-access-hqjfz\") pod \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.536510 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-dns-svc\") pod \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\" (UID: \"a0a2ba29-1ca7-4b10-9f24-5810b4e27296\") " Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.575692 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-kube-api-access-hqjfz" (OuterVolumeSpecName: "kube-api-access-hqjfz") pod "a0a2ba29-1ca7-4b10-9f24-5810b4e27296" (UID: "a0a2ba29-1ca7-4b10-9f24-5810b4e27296"). InnerVolumeSpecName "kube-api-access-hqjfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.639143 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqjfz\" (UniqueName: \"kubernetes.io/projected/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-kube-api-access-hqjfz\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.643972 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-config" (OuterVolumeSpecName: "config") pod "a0a2ba29-1ca7-4b10-9f24-5810b4e27296" (UID: "a0a2ba29-1ca7-4b10-9f24-5810b4e27296"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.649928 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a0a2ba29-1ca7-4b10-9f24-5810b4e27296" (UID: "a0a2ba29-1ca7-4b10-9f24-5810b4e27296"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.666719 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a0a2ba29-1ca7-4b10-9f24-5810b4e27296" (UID: "a0a2ba29-1ca7-4b10-9f24-5810b4e27296"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.688802 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a0a2ba29-1ca7-4b10-9f24-5810b4e27296" (UID: "a0a2ba29-1ca7-4b10-9f24-5810b4e27296"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.708750 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a0a2ba29-1ca7-4b10-9f24-5810b4e27296" (UID: "a0a2ba29-1ca7-4b10-9f24-5810b4e27296"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.741846 4794 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.741883 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-config\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.741893 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.741904 4794 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.741912 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0a2ba29-1ca7-4b10-9f24-5810b4e27296-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:52 crc kubenswrapper[4794]: I0216 17:22:52.805034 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20805f32-52bf-4449-90fd-8e83635f8154" path="/var/lib/kubelet/pods/20805f32-52bf-4449-90fd-8e83635f8154/volumes"
Feb 16 17:22:53 crc kubenswrapper[4794]: I0216 17:22:53.051259 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-bxq86" event={"ID":"a0a2ba29-1ca7-4b10-9f24-5810b4e27296","Type":"ContainerDied","Data":"bb4c5a21c6516f61e89aa83b46a15b1d2b63c00c0afd69a3e0c8aaf1ddd1a330"}
Feb 16 17:22:53 crc kubenswrapper[4794]: I0216 17:22:53.051313 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-bxq86"
Feb 16 17:22:53 crc kubenswrapper[4794]: I0216 17:22:53.051338 4794 scope.go:117] "RemoveContainer" containerID="22ab93557e6b5b080a320d16a6c25285734d1893f3818562159a1838e2b62e67"
Feb 16 17:22:53 crc kubenswrapper[4794]: I0216 17:22:53.052123 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2a3a7e64-dbe6-4fdb-94e1-83756c8a273a" containerName="cinder-scheduler" containerID="cri-o://e04ae9a3d6fc02a14610d806806b30ba6520ba8db61425bcabc5c638507f84b3" gracePeriod=30
Feb 16 17:22:53 crc kubenswrapper[4794]: I0216 17:22:53.052242 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2a3a7e64-dbe6-4fdb-94e1-83756c8a273a" containerName="probe" containerID="cri-o://482fb8816f84e0436fcbb7b6e7cee0c98c46e8439949473d16deeb1a5e7e5683" gracePeriod=30
Feb 16 17:22:53 crc kubenswrapper[4794]: I0216 17:22:53.077756 4794 scope.go:117] "RemoveContainer" containerID="8a9203a7a5fab91c4951810d4dfba524f991e23a7c384f1706fb1634b26b3f6c"
Feb 16 17:22:53 crc kubenswrapper[4794]: I0216 17:22:53.087401 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-bxq86"]
Feb 16 17:22:53 crc kubenswrapper[4794]: I0216 17:22:53.113023 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-bxq86"]
Feb 16 17:22:53 crc kubenswrapper[4794]: I0216 17:22:53.824752 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-57b87468-bqjtk"
Feb 16 17:22:53 crc kubenswrapper[4794]: I0216 17:22:53.827535 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-57b87468-bqjtk"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.097982 4794 generic.go:334] "Generic (PLEG): container finished" podID="2a3a7e64-dbe6-4fdb-94e1-83756c8a273a" containerID="482fb8816f84e0436fcbb7b6e7cee0c98c46e8439949473d16deeb1a5e7e5683" exitCode=0
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.099191 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a","Type":"ContainerDied","Data":"482fb8816f84e0436fcbb7b6e7cee0c98c46e8439949473d16deeb1a5e7e5683"}
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.120129 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-57f99b44dd-9kw4m"]
Feb 16 17:22:54 crc kubenswrapper[4794]: E0216 17:22:54.120608 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a2ba29-1ca7-4b10-9f24-5810b4e27296" containerName="dnsmasq-dns"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.120620 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a2ba29-1ca7-4b10-9f24-5810b4e27296" containerName="dnsmasq-dns"
Feb 16 17:22:54 crc kubenswrapper[4794]: E0216 17:22:54.120637 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a2ba29-1ca7-4b10-9f24-5810b4e27296" containerName="init"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.120644 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a2ba29-1ca7-4b10-9f24-5810b4e27296" containerName="init"
Feb 16 17:22:54 crc kubenswrapper[4794]: E0216 17:22:54.120662 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20805f32-52bf-4449-90fd-8e83635f8154" containerName="neutron-httpd"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.120669 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="20805f32-52bf-4449-90fd-8e83635f8154" containerName="neutron-httpd"
Feb 16 17:22:54 crc kubenswrapper[4794]: E0216 17:22:54.120688 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20805f32-52bf-4449-90fd-8e83635f8154" containerName="neutron-api"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.120693 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="20805f32-52bf-4449-90fd-8e83635f8154" containerName="neutron-api"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.128573 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0a2ba29-1ca7-4b10-9f24-5810b4e27296" containerName="dnsmasq-dns"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.128624 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="20805f32-52bf-4449-90fd-8e83635f8154" containerName="neutron-api"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.128651 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="20805f32-52bf-4449-90fd-8e83635f8154" containerName="neutron-httpd"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.129967 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.146595 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-57f99b44dd-9kw4m"]
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.192030 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70af0d8-dad3-4bab-bfea-e82fef6b308e-internal-tls-certs\") pod \"placement-57f99b44dd-9kw4m\" (UID: \"e70af0d8-dad3-4bab-bfea-e82fef6b308e\") " pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.192099 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e70af0d8-dad3-4bab-bfea-e82fef6b308e-scripts\") pod \"placement-57f99b44dd-9kw4m\" (UID: \"e70af0d8-dad3-4bab-bfea-e82fef6b308e\") " pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.192133 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e70af0d8-dad3-4bab-bfea-e82fef6b308e-config-data\") pod \"placement-57f99b44dd-9kw4m\" (UID: \"e70af0d8-dad3-4bab-bfea-e82fef6b308e\") " pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.192246 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70af0d8-dad3-4bab-bfea-e82fef6b308e-public-tls-certs\") pod \"placement-57f99b44dd-9kw4m\" (UID: \"e70af0d8-dad3-4bab-bfea-e82fef6b308e\") " pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.192346 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e70af0d8-dad3-4bab-bfea-e82fef6b308e-combined-ca-bundle\") pod \"placement-57f99b44dd-9kw4m\" (UID: \"e70af0d8-dad3-4bab-bfea-e82fef6b308e\") " pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.192373 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwjsk\" (UniqueName: \"kubernetes.io/projected/e70af0d8-dad3-4bab-bfea-e82fef6b308e-kube-api-access-nwjsk\") pod \"placement-57f99b44dd-9kw4m\" (UID: \"e70af0d8-dad3-4bab-bfea-e82fef6b308e\") " pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.192403 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e70af0d8-dad3-4bab-bfea-e82fef6b308e-logs\") pod \"placement-57f99b44dd-9kw4m\" (UID: \"e70af0d8-dad3-4bab-bfea-e82fef6b308e\") " pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.271818 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.294115 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70af0d8-dad3-4bab-bfea-e82fef6b308e-public-tls-certs\") pod \"placement-57f99b44dd-9kw4m\" (UID: \"e70af0d8-dad3-4bab-bfea-e82fef6b308e\") " pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.294201 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e70af0d8-dad3-4bab-bfea-e82fef6b308e-combined-ca-bundle\") pod \"placement-57f99b44dd-9kw4m\" (UID: \"e70af0d8-dad3-4bab-bfea-e82fef6b308e\") " pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.294225 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwjsk\" (UniqueName: \"kubernetes.io/projected/e70af0d8-dad3-4bab-bfea-e82fef6b308e-kube-api-access-nwjsk\") pod \"placement-57f99b44dd-9kw4m\" (UID: \"e70af0d8-dad3-4bab-bfea-e82fef6b308e\") " pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.294252 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e70af0d8-dad3-4bab-bfea-e82fef6b308e-logs\") pod \"placement-57f99b44dd-9kw4m\" (UID: \"e70af0d8-dad3-4bab-bfea-e82fef6b308e\") " pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.294346 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70af0d8-dad3-4bab-bfea-e82fef6b308e-internal-tls-certs\") pod \"placement-57f99b44dd-9kw4m\" (UID: \"e70af0d8-dad3-4bab-bfea-e82fef6b308e\") " pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.294372 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e70af0d8-dad3-4bab-bfea-e82fef6b308e-scripts\") pod \"placement-57f99b44dd-9kw4m\" (UID: \"e70af0d8-dad3-4bab-bfea-e82fef6b308e\") " pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.294393 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e70af0d8-dad3-4bab-bfea-e82fef6b308e-config-data\") pod \"placement-57f99b44dd-9kw4m\" (UID: \"e70af0d8-dad3-4bab-bfea-e82fef6b308e\") " pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.301333 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e70af0d8-dad3-4bab-bfea-e82fef6b308e-config-data\") pod \"placement-57f99b44dd-9kw4m\" (UID: \"e70af0d8-dad3-4bab-bfea-e82fef6b308e\") " pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.301356 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e70af0d8-dad3-4bab-bfea-e82fef6b308e-logs\") pod \"placement-57f99b44dd-9kw4m\" (UID: \"e70af0d8-dad3-4bab-bfea-e82fef6b308e\") " pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.305761 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70af0d8-dad3-4bab-bfea-e82fef6b308e-public-tls-certs\") pod \"placement-57f99b44dd-9kw4m\" (UID: \"e70af0d8-dad3-4bab-bfea-e82fef6b308e\") " pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.306205 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e70af0d8-dad3-4bab-bfea-e82fef6b308e-combined-ca-bundle\") pod \"placement-57f99b44dd-9kw4m\" (UID: \"e70af0d8-dad3-4bab-bfea-e82fef6b308e\") " pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.306557 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70af0d8-dad3-4bab-bfea-e82fef6b308e-internal-tls-certs\") pod \"placement-57f99b44dd-9kw4m\" (UID: \"e70af0d8-dad3-4bab-bfea-e82fef6b308e\") " pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.315797 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwjsk\" (UniqueName: \"kubernetes.io/projected/e70af0d8-dad3-4bab-bfea-e82fef6b308e-kube-api-access-nwjsk\") pod \"placement-57f99b44dd-9kw4m\" (UID: \"e70af0d8-dad3-4bab-bfea-e82fef6b308e\") " pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.316880 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e70af0d8-dad3-4bab-bfea-e82fef6b308e-scripts\") pod \"placement-57f99b44dd-9kw4m\" (UID: \"e70af0d8-dad3-4bab-bfea-e82fef6b308e\") " pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.456293 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.550661 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c7c9b8d66-vz9b5"
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.700865 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5544448f6b-g648r"]
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.701394 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5544448f6b-g648r" podUID="483c093a-519b-46a6-87c0-a4b43efc587e" containerName="barbican-api-log" containerID="cri-o://c7cd0a91ba9b29109548552b7d31997ae242b82a47a51e1549967df7d96246fe" gracePeriod=30
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.701932 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5544448f6b-g648r" podUID="483c093a-519b-46a6-87c0-a4b43efc587e" containerName="barbican-api" containerID="cri-o://26bc69357d6eb66460b7d582b9d3ce706bda07258ddf7e91852f21e6e928ccc0" gracePeriod=30
Feb 16 17:22:54 crc kubenswrapper[4794]: I0216 17:22:54.858654 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0a2ba29-1ca7-4b10-9f24-5810b4e27296" path="/var/lib/kubelet/pods/a0a2ba29-1ca7-4b10-9f24-5810b4e27296/volumes"
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.122289 4794 generic.go:334] "Generic (PLEG): container finished" podID="483c093a-519b-46a6-87c0-a4b43efc587e" containerID="c7cd0a91ba9b29109548552b7d31997ae242b82a47a51e1549967df7d96246fe" exitCode=143
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.122572 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5544448f6b-g648r" event={"ID":"483c093a-519b-46a6-87c0-a4b43efc587e","Type":"ContainerDied","Data":"c7cd0a91ba9b29109548552b7d31997ae242b82a47a51e1549967df7d96246fe"}
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.124161 4794 generic.go:334] "Generic (PLEG): container finished" podID="2a3a7e64-dbe6-4fdb-94e1-83756c8a273a" containerID="e04ae9a3d6fc02a14610d806806b30ba6520ba8db61425bcabc5c638507f84b3" exitCode=0
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.125176 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a","Type":"ContainerDied","Data":"e04ae9a3d6fc02a14610d806806b30ba6520ba8db61425bcabc5c638507f84b3"}
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.168536 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-57f99b44dd-9kw4m"]
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.516264 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.565858 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-config-data-custom\") pod \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") "
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.565987 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-combined-ca-bundle\") pod \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") "
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.566077 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-config-data\") pod \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") "
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.566225 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-etc-machine-id\") pod \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") "
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.566361 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2xlg\" (UniqueName: \"kubernetes.io/projected/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-kube-api-access-c2xlg\") pod \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") "
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.566507 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-scripts\") pod \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\" (UID: \"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a\") "
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.568584 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2a3a7e64-dbe6-4fdb-94e1-83756c8a273a" (UID: "2a3a7e64-dbe6-4fdb-94e1-83756c8a273a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.581683 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-scripts" (OuterVolumeSpecName: "scripts") pod "2a3a7e64-dbe6-4fdb-94e1-83756c8a273a" (UID: "2a3a7e64-dbe6-4fdb-94e1-83756c8a273a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.582701 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-kube-api-access-c2xlg" (OuterVolumeSpecName: "kube-api-access-c2xlg") pod "2a3a7e64-dbe6-4fdb-94e1-83756c8a273a" (UID: "2a3a7e64-dbe6-4fdb-94e1-83756c8a273a"). InnerVolumeSpecName "kube-api-access-c2xlg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.583737 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2a3a7e64-dbe6-4fdb-94e1-83756c8a273a" (UID: "2a3a7e64-dbe6-4fdb-94e1-83756c8a273a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.674891 4794 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.674917 4794 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.674927 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2xlg\" (UniqueName: \"kubernetes.io/projected/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-kube-api-access-c2xlg\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.674937 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.718702 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a3a7e64-dbe6-4fdb-94e1-83756c8a273a" (UID: "2a3a7e64-dbe6-4fdb-94e1-83756c8a273a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.776471 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.899064 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-config-data" (OuterVolumeSpecName: "config-data") pod "2a3a7e64-dbe6-4fdb-94e1-83756c8a273a" (UID: "2a3a7e64-dbe6-4fdb-94e1-83756c8a273a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.923719 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Feb 16 17:22:55 crc kubenswrapper[4794]: I0216 17:22:55.980792 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.179710 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-57f99b44dd-9kw4m" event={"ID":"e70af0d8-dad3-4bab-bfea-e82fef6b308e","Type":"ContainerStarted","Data":"b5097dd4fdbfd45395cd9d40031731384e909b83384bf4b5b3432e6db0fee422"}
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.179999 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-57f99b44dd-9kw4m" event={"ID":"e70af0d8-dad3-4bab-bfea-e82fef6b308e","Type":"ContainerStarted","Data":"e7a22d6aa1e27bbe9ee45f4ef0206bbe3a43f533609a0a75a285af2b70b5efd8"}
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.201541 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2a3a7e64-dbe6-4fdb-94e1-83756c8a273a","Type":"ContainerDied","Data":"c91070b600962c2cff667e5761252e17474e9edf08c4ab402fbf18f225cff0c4"}
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.201598 4794 scope.go:117] "RemoveContainer" containerID="482fb8816f84e0436fcbb7b6e7cee0c98c46e8439949473d16deeb1a5e7e5683"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.201801 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.260683 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.279526 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.298380 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 17:22:56 crc kubenswrapper[4794]: E0216 17:22:56.299004 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3a7e64-dbe6-4fdb-94e1-83756c8a273a" containerName="probe"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.299030 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3a7e64-dbe6-4fdb-94e1-83756c8a273a" containerName="probe"
Feb 16 17:22:56 crc kubenswrapper[4794]: E0216 17:22:56.299096 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3a7e64-dbe6-4fdb-94e1-83756c8a273a" containerName="cinder-scheduler"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.299107 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3a7e64-dbe6-4fdb-94e1-83756c8a273a" containerName="cinder-scheduler"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.299422 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a3a7e64-dbe6-4fdb-94e1-83756c8a273a" containerName="probe"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.299445 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a3a7e64-dbe6-4fdb-94e1-83756c8a273a" containerName="cinder-scheduler"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.300990 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.303005 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.327512 4794 scope.go:117] "RemoveContainer" containerID="e04ae9a3d6fc02a14610d806806b30ba6520ba8db61425bcabc5c638507f84b3"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.348400 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.408840 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3342e2cd-2d8f-4dee-be8e-86c60e81ba81-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3342e2cd-2d8f-4dee-be8e-86c60e81ba81\") " pod="openstack/cinder-scheduler-0"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.408921 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwhjz\" (UniqueName: \"kubernetes.io/projected/3342e2cd-2d8f-4dee-be8e-86c60e81ba81-kube-api-access-vwhjz\") pod \"cinder-scheduler-0\" (UID: \"3342e2cd-2d8f-4dee-be8e-86c60e81ba81\") " pod="openstack/cinder-scheduler-0"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.409088 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3342e2cd-2d8f-4dee-be8e-86c60e81ba81-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3342e2cd-2d8f-4dee-be8e-86c60e81ba81\") " pod="openstack/cinder-scheduler-0"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.409138 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3342e2cd-2d8f-4dee-be8e-86c60e81ba81-scripts\") pod \"cinder-scheduler-0\" (UID: \"3342e2cd-2d8f-4dee-be8e-86c60e81ba81\") " pod="openstack/cinder-scheduler-0"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.409229 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3342e2cd-2d8f-4dee-be8e-86c60e81ba81-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3342e2cd-2d8f-4dee-be8e-86c60e81ba81\") " pod="openstack/cinder-scheduler-0"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.409297 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3342e2cd-2d8f-4dee-be8e-86c60e81ba81-config-data\") pod \"cinder-scheduler-0\" (UID: \"3342e2cd-2d8f-4dee-be8e-86c60e81ba81\") " pod="openstack/cinder-scheduler-0"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.511749 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3342e2cd-2d8f-4dee-be8e-86c60e81ba81-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3342e2cd-2d8f-4dee-be8e-86c60e81ba81\") " pod="openstack/cinder-scheduler-0"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.511827 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwhjz\" (UniqueName: \"kubernetes.io/projected/3342e2cd-2d8f-4dee-be8e-86c60e81ba81-kube-api-access-vwhjz\") pod \"cinder-scheduler-0\" (UID: \"3342e2cd-2d8f-4dee-be8e-86c60e81ba81\") " pod="openstack/cinder-scheduler-0"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.511910 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3342e2cd-2d8f-4dee-be8e-86c60e81ba81-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3342e2cd-2d8f-4dee-be8e-86c60e81ba81\") " pod="openstack/cinder-scheduler-0"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.511930 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3342e2cd-2d8f-4dee-be8e-86c60e81ba81-scripts\") pod \"cinder-scheduler-0\" (UID: \"3342e2cd-2d8f-4dee-be8e-86c60e81ba81\") " pod="openstack/cinder-scheduler-0"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.511948 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3342e2cd-2d8f-4dee-be8e-86c60e81ba81-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3342e2cd-2d8f-4dee-be8e-86c60e81ba81\") " pod="openstack/cinder-scheduler-0"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.511976 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3342e2cd-2d8f-4dee-be8e-86c60e81ba81-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3342e2cd-2d8f-4dee-be8e-86c60e81ba81\") " pod="openstack/cinder-scheduler-0"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.512795 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3342e2cd-2d8f-4dee-be8e-86c60e81ba81-config-data\") pod \"cinder-scheduler-0\" (UID: \"3342e2cd-2d8f-4dee-be8e-86c60e81ba81\") " pod="openstack/cinder-scheduler-0"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.517723 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3342e2cd-2d8f-4dee-be8e-86c60e81ba81-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3342e2cd-2d8f-4dee-be8e-86c60e81ba81\") " pod="openstack/cinder-scheduler-0"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.518466 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3342e2cd-2d8f-4dee-be8e-86c60e81ba81-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3342e2cd-2d8f-4dee-be8e-86c60e81ba81\") " pod="openstack/cinder-scheduler-0"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.520563 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3342e2cd-2d8f-4dee-be8e-86c60e81ba81-config-data\") pod \"cinder-scheduler-0\" (UID: \"3342e2cd-2d8f-4dee-be8e-86c60e81ba81\") " pod="openstack/cinder-scheduler-0"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.539096 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwhjz\" (UniqueName: \"kubernetes.io/projected/3342e2cd-2d8f-4dee-be8e-86c60e81ba81-kube-api-access-vwhjz\") pod \"cinder-scheduler-0\" (UID: \"3342e2cd-2d8f-4dee-be8e-86c60e81ba81\") " pod="openstack/cinder-scheduler-0"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.560864 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3342e2cd-2d8f-4dee-be8e-86c60e81ba81-scripts\") pod \"cinder-scheduler-0\" (UID: \"3342e2cd-2d8f-4dee-be8e-86c60e81ba81\") " pod="openstack/cinder-scheduler-0"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.640120 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 16 17:22:56 crc kubenswrapper[4794]: I0216 17:22:56.810729 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a3a7e64-dbe6-4fdb-94e1-83756c8a273a" path="/var/lib/kubelet/pods/2a3a7e64-dbe6-4fdb-94e1-83756c8a273a/volumes"
Feb 16 17:22:57 crc kubenswrapper[4794]: I0216 17:22:57.214614 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-57f99b44dd-9kw4m" event={"ID":"e70af0d8-dad3-4bab-bfea-e82fef6b308e","Type":"ContainerStarted","Data":"9ba9902c3af62306ee42c89b68999bfcd36ea982ba044875aa2d409c9b3083dc"}
Feb 16 17:22:57 crc kubenswrapper[4794]: I0216 17:22:57.214987 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:57 crc kubenswrapper[4794]: I0216 17:22:57.215004 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:22:57 crc kubenswrapper[4794]: I0216 17:22:57.239041 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 17:22:57 crc kubenswrapper[4794]: I0216 17:22:57.244606 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-57f99b44dd-9kw4m" podStartSLOduration=3.244582036 podStartE2EDuration="3.244582036s" podCreationTimestamp="2026-02-16 17:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:57.241097277 +0000 UTC m=+1403.189191944" watchObservedRunningTime="2026-02-16 17:22:57.244582036 +0000 UTC m=+1403.192676693"
Feb 16 17:22:58 crc kubenswrapper[4794]: I0216 17:22:58.243382 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3342e2cd-2d8f-4dee-be8e-86c60e81ba81","Type":"ContainerStarted","Data":"5450eac0c44879a84d42a04248795db40d5d0e587d15d0db140b7b8b50085040"}
Feb 16 17:22:58 crc kubenswrapper[4794]: I0216 17:22:58.264574 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5544448f6b-g648r" podUID="483c093a-519b-46a6-87c0-a4b43efc587e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": read tcp 10.217.0.2:35454->10.217.0.201:9311: read: connection reset by peer" Feb 16 17:22:58 crc kubenswrapper[4794]: I0216 17:22:58.265103 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5544448f6b-g648r" podUID="483c093a-519b-46a6-87c0-a4b43efc587e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": read tcp 10.217.0.2:35462->10.217.0.201:9311: read: connection reset by peer" Feb 16 17:22:58 crc kubenswrapper[4794]: I0216 17:22:58.646801 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5bfdb47d5f-nhr7b" Feb 16 17:22:58 crc kubenswrapper[4794]: I0216 17:22:58.880721 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5544448f6b-g648r" Feb 16 17:22:58 crc kubenswrapper[4794]: I0216 17:22:58.993431 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/483c093a-519b-46a6-87c0-a4b43efc587e-config-data\") pod \"483c093a-519b-46a6-87c0-a4b43efc587e\" (UID: \"483c093a-519b-46a6-87c0-a4b43efc587e\") " Feb 16 17:22:58 crc kubenswrapper[4794]: I0216 17:22:58.993534 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483c093a-519b-46a6-87c0-a4b43efc587e-combined-ca-bundle\") pod \"483c093a-519b-46a6-87c0-a4b43efc587e\" (UID: \"483c093a-519b-46a6-87c0-a4b43efc587e\") " Feb 16 17:22:58 crc kubenswrapper[4794]: I0216 17:22:58.993593 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb6wx\" (UniqueName: \"kubernetes.io/projected/483c093a-519b-46a6-87c0-a4b43efc587e-kube-api-access-mb6wx\") pod \"483c093a-519b-46a6-87c0-a4b43efc587e\" (UID: \"483c093a-519b-46a6-87c0-a4b43efc587e\") " Feb 16 17:22:58 crc kubenswrapper[4794]: I0216 17:22:58.993669 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/483c093a-519b-46a6-87c0-a4b43efc587e-config-data-custom\") pod \"483c093a-519b-46a6-87c0-a4b43efc587e\" (UID: \"483c093a-519b-46a6-87c0-a4b43efc587e\") " Feb 16 17:22:58 crc kubenswrapper[4794]: I0216 17:22:58.993735 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/483c093a-519b-46a6-87c0-a4b43efc587e-logs\") pod \"483c093a-519b-46a6-87c0-a4b43efc587e\" (UID: \"483c093a-519b-46a6-87c0-a4b43efc587e\") " Feb 16 17:22:58 crc kubenswrapper[4794]: I0216 17:22:58.994942 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/483c093a-519b-46a6-87c0-a4b43efc587e-logs" (OuterVolumeSpecName: "logs") pod "483c093a-519b-46a6-87c0-a4b43efc587e" (UID: "483c093a-519b-46a6-87c0-a4b43efc587e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.003266 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483c093a-519b-46a6-87c0-a4b43efc587e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "483c093a-519b-46a6-87c0-a4b43efc587e" (UID: "483c093a-519b-46a6-87c0-a4b43efc587e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.004399 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/483c093a-519b-46a6-87c0-a4b43efc587e-kube-api-access-mb6wx" (OuterVolumeSpecName: "kube-api-access-mb6wx") pod "483c093a-519b-46a6-87c0-a4b43efc587e" (UID: "483c093a-519b-46a6-87c0-a4b43efc587e"). InnerVolumeSpecName "kube-api-access-mb6wx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.029540 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483c093a-519b-46a6-87c0-a4b43efc587e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "483c093a-519b-46a6-87c0-a4b43efc587e" (UID: "483c093a-519b-46a6-87c0-a4b43efc587e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.089709 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483c093a-519b-46a6-87c0-a4b43efc587e-config-data" (OuterVolumeSpecName: "config-data") pod "483c093a-519b-46a6-87c0-a4b43efc587e" (UID: "483c093a-519b-46a6-87c0-a4b43efc587e"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.098472 4794 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/483c093a-519b-46a6-87c0-a4b43efc587e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.098518 4794 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/483c093a-519b-46a6-87c0-a4b43efc587e-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.098531 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/483c093a-519b-46a6-87c0-a4b43efc587e-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.098542 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483c093a-519b-46a6-87c0-a4b43efc587e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.098554 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb6wx\" (UniqueName: \"kubernetes.io/projected/483c093a-519b-46a6-87c0-a4b43efc587e-kube-api-access-mb6wx\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.255930 4794 generic.go:334] "Generic (PLEG): container finished" podID="483c093a-519b-46a6-87c0-a4b43efc587e" containerID="26bc69357d6eb66460b7d582b9d3ce706bda07258ddf7e91852f21e6e928ccc0" exitCode=0 Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.256095 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5544448f6b-g648r" event={"ID":"483c093a-519b-46a6-87c0-a4b43efc587e","Type":"ContainerDied","Data":"26bc69357d6eb66460b7d582b9d3ce706bda07258ddf7e91852f21e6e928ccc0"} Feb 16 
17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.256384 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5544448f6b-g648r" event={"ID":"483c093a-519b-46a6-87c0-a4b43efc587e","Type":"ContainerDied","Data":"036b321232f9713e98e85b71df988337d930763300082f61d3fd2c7a623ecc42"} Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.256413 4794 scope.go:117] "RemoveContainer" containerID="26bc69357d6eb66460b7d582b9d3ce706bda07258ddf7e91852f21e6e928ccc0" Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.256163 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5544448f6b-g648r" Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.263138 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3342e2cd-2d8f-4dee-be8e-86c60e81ba81","Type":"ContainerStarted","Data":"dc12e385f9d5f7117c3c8a3ec64af031f840845d942f1656b0e8bcee1c13414d"} Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.263171 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3342e2cd-2d8f-4dee-be8e-86c60e81ba81","Type":"ContainerStarted","Data":"75d822b36a6a5e3518dea717e03669513c2ea3a8edbe3e3d65ab7aa3219aeb40"} Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.281540 4794 scope.go:117] "RemoveContainer" containerID="c7cd0a91ba9b29109548552b7d31997ae242b82a47a51e1549967df7d96246fe" Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.345768 4794 scope.go:117] "RemoveContainer" containerID="26bc69357d6eb66460b7d582b9d3ce706bda07258ddf7e91852f21e6e928ccc0" Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.346243 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.346220356 podStartE2EDuration="3.346220356s" podCreationTimestamp="2026-02-16 17:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:59.307247435 +0000 UTC m=+1405.255342102" watchObservedRunningTime="2026-02-16 17:22:59.346220356 +0000 UTC m=+1405.294315003" Feb 16 17:22:59 crc kubenswrapper[4794]: E0216 17:22:59.348995 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26bc69357d6eb66460b7d582b9d3ce706bda07258ddf7e91852f21e6e928ccc0\": container with ID starting with 26bc69357d6eb66460b7d582b9d3ce706bda07258ddf7e91852f21e6e928ccc0 not found: ID does not exist" containerID="26bc69357d6eb66460b7d582b9d3ce706bda07258ddf7e91852f21e6e928ccc0" Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.349038 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26bc69357d6eb66460b7d582b9d3ce706bda07258ddf7e91852f21e6e928ccc0"} err="failed to get container status \"26bc69357d6eb66460b7d582b9d3ce706bda07258ddf7e91852f21e6e928ccc0\": rpc error: code = NotFound desc = could not find container \"26bc69357d6eb66460b7d582b9d3ce706bda07258ddf7e91852f21e6e928ccc0\": container with ID starting with 26bc69357d6eb66460b7d582b9d3ce706bda07258ddf7e91852f21e6e928ccc0 not found: ID does not exist" Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.349070 4794 scope.go:117] "RemoveContainer" containerID="c7cd0a91ba9b29109548552b7d31997ae242b82a47a51e1549967df7d96246fe" Feb 16 17:22:59 crc kubenswrapper[4794]: E0216 17:22:59.349384 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7cd0a91ba9b29109548552b7d31997ae242b82a47a51e1549967df7d96246fe\": container with ID starting with c7cd0a91ba9b29109548552b7d31997ae242b82a47a51e1549967df7d96246fe not found: ID does not exist" containerID="c7cd0a91ba9b29109548552b7d31997ae242b82a47a51e1549967df7d96246fe" Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.349407 4794 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7cd0a91ba9b29109548552b7d31997ae242b82a47a51e1549967df7d96246fe"} err="failed to get container status \"c7cd0a91ba9b29109548552b7d31997ae242b82a47a51e1549967df7d96246fe\": rpc error: code = NotFound desc = could not find container \"c7cd0a91ba9b29109548552b7d31997ae242b82a47a51e1549967df7d96246fe\": container with ID starting with c7cd0a91ba9b29109548552b7d31997ae242b82a47a51e1549967df7d96246fe not found: ID does not exist" Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.359023 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5544448f6b-g648r"] Feb 16 17:22:59 crc kubenswrapper[4794]: I0216 17:22:59.371943 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5544448f6b-g648r"] Feb 16 17:23:00 crc kubenswrapper[4794]: I0216 17:23:00.819369 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="483c093a-519b-46a6-87c0-a4b43efc587e" path="/var/lib/kubelet/pods/483c093a-519b-46a6-87c0-a4b43efc587e/volumes" Feb 16 17:23:01 crc kubenswrapper[4794]: I0216 17:23:01.642665 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.018390 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 16 17:23:02 crc kubenswrapper[4794]: E0216 17:23:02.018991 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483c093a-519b-46a6-87c0-a4b43efc587e" containerName="barbican-api" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.019010 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="483c093a-519b-46a6-87c0-a4b43efc587e" containerName="barbican-api" Feb 16 17:23:02 crc kubenswrapper[4794]: E0216 17:23:02.019051 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483c093a-519b-46a6-87c0-a4b43efc587e" 
containerName="barbican-api-log" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.019060 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="483c093a-519b-46a6-87c0-a4b43efc587e" containerName="barbican-api-log" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.019601 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="483c093a-519b-46a6-87c0-a4b43efc587e" containerName="barbican-api" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.019650 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="483c093a-519b-46a6-87c0-a4b43efc587e" containerName="barbican-api-log" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.020766 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.023411 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.023555 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-kvhvz" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.023613 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.033424 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.066904 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11c6449f-ae59-4210-9c59-bafcbb116ed8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"11c6449f-ae59-4210-9c59-bafcbb116ed8\") " pod="openstack/openstackclient" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.066966 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/11c6449f-ae59-4210-9c59-bafcbb116ed8-openstack-config-secret\") pod \"openstackclient\" (UID: \"11c6449f-ae59-4210-9c59-bafcbb116ed8\") " pod="openstack/openstackclient" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.067044 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/11c6449f-ae59-4210-9c59-bafcbb116ed8-openstack-config\") pod \"openstackclient\" (UID: \"11c6449f-ae59-4210-9c59-bafcbb116ed8\") " pod="openstack/openstackclient" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.067247 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq4bq\" (UniqueName: \"kubernetes.io/projected/11c6449f-ae59-4210-9c59-bafcbb116ed8-kube-api-access-wq4bq\") pod \"openstackclient\" (UID: \"11c6449f-ae59-4210-9c59-bafcbb116ed8\") " pod="openstack/openstackclient" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.169045 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11c6449f-ae59-4210-9c59-bafcbb116ed8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"11c6449f-ae59-4210-9c59-bafcbb116ed8\") " pod="openstack/openstackclient" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.169413 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/11c6449f-ae59-4210-9c59-bafcbb116ed8-openstack-config-secret\") pod \"openstackclient\" (UID: \"11c6449f-ae59-4210-9c59-bafcbb116ed8\") " pod="openstack/openstackclient" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.169450 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" 
(UniqueName: \"kubernetes.io/configmap/11c6449f-ae59-4210-9c59-bafcbb116ed8-openstack-config\") pod \"openstackclient\" (UID: \"11c6449f-ae59-4210-9c59-bafcbb116ed8\") " pod="openstack/openstackclient" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.169545 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq4bq\" (UniqueName: \"kubernetes.io/projected/11c6449f-ae59-4210-9c59-bafcbb116ed8-kube-api-access-wq4bq\") pod \"openstackclient\" (UID: \"11c6449f-ae59-4210-9c59-bafcbb116ed8\") " pod="openstack/openstackclient" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.170494 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/11c6449f-ae59-4210-9c59-bafcbb116ed8-openstack-config\") pod \"openstackclient\" (UID: \"11c6449f-ae59-4210-9c59-bafcbb116ed8\") " pod="openstack/openstackclient" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.179131 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/11c6449f-ae59-4210-9c59-bafcbb116ed8-openstack-config-secret\") pod \"openstackclient\" (UID: \"11c6449f-ae59-4210-9c59-bafcbb116ed8\") " pod="openstack/openstackclient" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.191032 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11c6449f-ae59-4210-9c59-bafcbb116ed8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"11c6449f-ae59-4210-9c59-bafcbb116ed8\") " pod="openstack/openstackclient" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.193686 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq4bq\" (UniqueName: \"kubernetes.io/projected/11c6449f-ae59-4210-9c59-bafcbb116ed8-kube-api-access-wq4bq\") pod \"openstackclient\" (UID: 
\"11c6449f-ae59-4210-9c59-bafcbb116ed8\") " pod="openstack/openstackclient" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.341911 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 16 17:23:02 crc kubenswrapper[4794]: I0216 17:23:02.904874 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 17:23:03 crc kubenswrapper[4794]: I0216 17:23:03.312425 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"11c6449f-ae59-4210-9c59-bafcbb116ed8","Type":"ContainerStarted","Data":"30671953d1199a1563e806073c98358f092f6b7cea981e4e22ea8c656fa3fb6b"} Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.503907 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-599bd89595-29q2j"] Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.505717 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-599bd89595-29q2j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.510615 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-s27hr" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.510889 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.523327 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.526028 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-599bd89595-29q2j"] Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.540934 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1082952-332b-4c48-b37b-52c919f87f0f-config-data\") pod 
\"heat-engine-599bd89595-29q2j\" (UID: \"f1082952-332b-4c48-b37b-52c919f87f0f\") " pod="openstack/heat-engine-599bd89595-29q2j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.541003 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1082952-332b-4c48-b37b-52c919f87f0f-config-data-custom\") pod \"heat-engine-599bd89595-29q2j\" (UID: \"f1082952-332b-4c48-b37b-52c919f87f0f\") " pod="openstack/heat-engine-599bd89595-29q2j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.541095 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1082952-332b-4c48-b37b-52c919f87f0f-combined-ca-bundle\") pod \"heat-engine-599bd89595-29q2j\" (UID: \"f1082952-332b-4c48-b37b-52c919f87f0f\") " pod="openstack/heat-engine-599bd89595-29q2j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.541123 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgb6f\" (UniqueName: \"kubernetes.io/projected/f1082952-332b-4c48-b37b-52c919f87f0f-kube-api-access-xgb6f\") pod \"heat-engine-599bd89595-29q2j\" (UID: \"f1082952-332b-4c48-b37b-52c919f87f0f\") " pod="openstack/heat-engine-599bd89595-29q2j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.595202 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-4pz6j"] Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.605585 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.637259 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-4pz6j"] Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.646071 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-config\") pod \"dnsmasq-dns-688b9f5b49-4pz6j\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.646147 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1082952-332b-4c48-b37b-52c919f87f0f-config-data\") pod \"heat-engine-599bd89595-29q2j\" (UID: \"f1082952-332b-4c48-b37b-52c919f87f0f\") " pod="openstack/heat-engine-599bd89595-29q2j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.646208 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1082952-332b-4c48-b37b-52c919f87f0f-config-data-custom\") pod \"heat-engine-599bd89595-29q2j\" (UID: \"f1082952-332b-4c48-b37b-52c919f87f0f\") " pod="openstack/heat-engine-599bd89595-29q2j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.646231 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-4pz6j\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.646360 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zdfd2\" (UniqueName: \"kubernetes.io/projected/fb9be534-e864-414a-8dcd-6c9457f6f0bc-kube-api-access-zdfd2\") pod \"dnsmasq-dns-688b9f5b49-4pz6j\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.646393 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1082952-332b-4c48-b37b-52c919f87f0f-combined-ca-bundle\") pod \"heat-engine-599bd89595-29q2j\" (UID: \"f1082952-332b-4c48-b37b-52c919f87f0f\") " pod="openstack/heat-engine-599bd89595-29q2j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.646426 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgb6f\" (UniqueName: \"kubernetes.io/projected/f1082952-332b-4c48-b37b-52c919f87f0f-kube-api-access-xgb6f\") pod \"heat-engine-599bd89595-29q2j\" (UID: \"f1082952-332b-4c48-b37b-52c919f87f0f\") " pod="openstack/heat-engine-599bd89595-29q2j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.646469 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-4pz6j\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.646493 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-4pz6j\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.646624 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-4pz6j\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.655099 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1082952-332b-4c48-b37b-52c919f87f0f-config-data-custom\") pod \"heat-engine-599bd89595-29q2j\" (UID: \"f1082952-332b-4c48-b37b-52c919f87f0f\") " pod="openstack/heat-engine-599bd89595-29q2j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.665739 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1082952-332b-4c48-b37b-52c919f87f0f-config-data\") pod \"heat-engine-599bd89595-29q2j\" (UID: \"f1082952-332b-4c48-b37b-52c919f87f0f\") " pod="openstack/heat-engine-599bd89595-29q2j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.684595 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1082952-332b-4c48-b37b-52c919f87f0f-combined-ca-bundle\") pod \"heat-engine-599bd89595-29q2j\" (UID: \"f1082952-332b-4c48-b37b-52c919f87f0f\") " pod="openstack/heat-engine-599bd89595-29q2j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.691447 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgb6f\" (UniqueName: \"kubernetes.io/projected/f1082952-332b-4c48-b37b-52c919f87f0f-kube-api-access-xgb6f\") pod \"heat-engine-599bd89595-29q2j\" (UID: \"f1082952-332b-4c48-b37b-52c919f87f0f\") " pod="openstack/heat-engine-599bd89595-29q2j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.767940 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zdfd2\" (UniqueName: \"kubernetes.io/projected/fb9be534-e864-414a-8dcd-6c9457f6f0bc-kube-api-access-zdfd2\") pod \"dnsmasq-dns-688b9f5b49-4pz6j\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.768054 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-4pz6j\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.768085 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-4pz6j\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.768236 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-4pz6j\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.768321 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-config\") pod \"dnsmasq-dns-688b9f5b49-4pz6j\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.768418 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-4pz6j\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.782078 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-4pz6j\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.792867 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-4pz6j\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.829572 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-config\") pod \"dnsmasq-dns-688b9f5b49-4pz6j\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.840152 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-4pz6j\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.848476 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-dns-swift-storage-0\") pod 
\"dnsmasq-dns-688b9f5b49-4pz6j\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.852073 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-599bd89595-29q2j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.875796 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-67dd4d6c5f-84d7v"] Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.877501 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.882596 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.891258 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdfd2\" (UniqueName: \"kubernetes.io/projected/fb9be534-e864-414a-8dcd-6c9457f6f0bc-kube-api-access-zdfd2\") pod \"dnsmasq-dns-688b9f5b49-4pz6j\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.929712 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6df69d99d9-jgbc2"] Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.932410 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6df69d99d9-jgbc2" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.943429 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.973497 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-67dd4d6c5f-84d7v"] Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.976905 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgmwp\" (UniqueName: \"kubernetes.io/projected/678592d4-5921-4bc3-bdc9-d47b36ffba37-kube-api-access-zgmwp\") pod \"heat-api-6df69d99d9-jgbc2\" (UID: \"678592d4-5921-4bc3-bdc9-d47b36ffba37\") " pod="openstack/heat-api-6df69d99d9-jgbc2" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.976954 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-config-data-custom\") pod \"heat-cfnapi-67dd4d6c5f-84d7v\" (UID: \"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d\") " pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.976978 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-config-data\") pod \"heat-cfnapi-67dd4d6c5f-84d7v\" (UID: \"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d\") " pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.977027 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-combined-ca-bundle\") pod \"heat-cfnapi-67dd4d6c5f-84d7v\" (UID: \"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d\") " 
pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.977154 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/678592d4-5921-4bc3-bdc9-d47b36ffba37-config-data-custom\") pod \"heat-api-6df69d99d9-jgbc2\" (UID: \"678592d4-5921-4bc3-bdc9-d47b36ffba37\") " pod="openstack/heat-api-6df69d99d9-jgbc2" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.977189 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrr99\" (UniqueName: \"kubernetes.io/projected/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-kube-api-access-nrr99\") pod \"heat-cfnapi-67dd4d6c5f-84d7v\" (UID: \"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d\") " pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.977224 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678592d4-5921-4bc3-bdc9-d47b36ffba37-config-data\") pod \"heat-api-6df69d99d9-jgbc2\" (UID: \"678592d4-5921-4bc3-bdc9-d47b36ffba37\") " pod="openstack/heat-api-6df69d99d9-jgbc2" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.977255 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678592d4-5921-4bc3-bdc9-d47b36ffba37-combined-ca-bundle\") pod \"heat-api-6df69d99d9-jgbc2\" (UID: \"678592d4-5921-4bc3-bdc9-d47b36ffba37\") " pod="openstack/heat-api-6df69d99d9-jgbc2" Feb 16 17:23:04 crc kubenswrapper[4794]: I0216 17:23:04.993279 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6df69d99d9-jgbc2"] Feb 16 17:23:05 crc kubenswrapper[4794]: I0216 17:23:05.078817 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/678592d4-5921-4bc3-bdc9-d47b36ffba37-config-data-custom\") pod \"heat-api-6df69d99d9-jgbc2\" (UID: \"678592d4-5921-4bc3-bdc9-d47b36ffba37\") " pod="openstack/heat-api-6df69d99d9-jgbc2" Feb 16 17:23:05 crc kubenswrapper[4794]: I0216 17:23:05.078878 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrr99\" (UniqueName: \"kubernetes.io/projected/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-kube-api-access-nrr99\") pod \"heat-cfnapi-67dd4d6c5f-84d7v\" (UID: \"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d\") " pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" Feb 16 17:23:05 crc kubenswrapper[4794]: I0216 17:23:05.078910 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678592d4-5921-4bc3-bdc9-d47b36ffba37-config-data\") pod \"heat-api-6df69d99d9-jgbc2\" (UID: \"678592d4-5921-4bc3-bdc9-d47b36ffba37\") " pod="openstack/heat-api-6df69d99d9-jgbc2" Feb 16 17:23:05 crc kubenswrapper[4794]: I0216 17:23:05.078942 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678592d4-5921-4bc3-bdc9-d47b36ffba37-combined-ca-bundle\") pod \"heat-api-6df69d99d9-jgbc2\" (UID: \"678592d4-5921-4bc3-bdc9-d47b36ffba37\") " pod="openstack/heat-api-6df69d99d9-jgbc2" Feb 16 17:23:05 crc kubenswrapper[4794]: I0216 17:23:05.079014 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgmwp\" (UniqueName: \"kubernetes.io/projected/678592d4-5921-4bc3-bdc9-d47b36ffba37-kube-api-access-zgmwp\") pod \"heat-api-6df69d99d9-jgbc2\" (UID: \"678592d4-5921-4bc3-bdc9-d47b36ffba37\") " pod="openstack/heat-api-6df69d99d9-jgbc2" Feb 16 17:23:05 crc kubenswrapper[4794]: I0216 17:23:05.079036 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-config-data-custom\") pod \"heat-cfnapi-67dd4d6c5f-84d7v\" (UID: \"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d\") " pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" Feb 16 17:23:05 crc kubenswrapper[4794]: I0216 17:23:05.079054 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-config-data\") pod \"heat-cfnapi-67dd4d6c5f-84d7v\" (UID: \"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d\") " pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" Feb 16 17:23:05 crc kubenswrapper[4794]: I0216 17:23:05.079091 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-combined-ca-bundle\") pod \"heat-cfnapi-67dd4d6c5f-84d7v\" (UID: \"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d\") " pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" Feb 16 17:23:05 crc kubenswrapper[4794]: I0216 17:23:05.092256 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678592d4-5921-4bc3-bdc9-d47b36ffba37-config-data\") pod \"heat-api-6df69d99d9-jgbc2\" (UID: \"678592d4-5921-4bc3-bdc9-d47b36ffba37\") " pod="openstack/heat-api-6df69d99d9-jgbc2" Feb 16 17:23:05 crc kubenswrapper[4794]: I0216 17:23:05.094189 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-config-data\") pod \"heat-cfnapi-67dd4d6c5f-84d7v\" (UID: \"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d\") " pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" Feb 16 17:23:05 crc kubenswrapper[4794]: I0216 17:23:05.096252 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:05 crc kubenswrapper[4794]: I0216 17:23:05.096592 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/678592d4-5921-4bc3-bdc9-d47b36ffba37-config-data-custom\") pod \"heat-api-6df69d99d9-jgbc2\" (UID: \"678592d4-5921-4bc3-bdc9-d47b36ffba37\") " pod="openstack/heat-api-6df69d99d9-jgbc2" Feb 16 17:23:05 crc kubenswrapper[4794]: I0216 17:23:05.110105 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678592d4-5921-4bc3-bdc9-d47b36ffba37-combined-ca-bundle\") pod \"heat-api-6df69d99d9-jgbc2\" (UID: \"678592d4-5921-4bc3-bdc9-d47b36ffba37\") " pod="openstack/heat-api-6df69d99d9-jgbc2" Feb 16 17:23:05 crc kubenswrapper[4794]: I0216 17:23:05.114684 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-config-data-custom\") pod \"heat-cfnapi-67dd4d6c5f-84d7v\" (UID: \"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d\") " pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" Feb 16 17:23:05 crc kubenswrapper[4794]: I0216 17:23:05.116137 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-combined-ca-bundle\") pod \"heat-cfnapi-67dd4d6c5f-84d7v\" (UID: \"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d\") " pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" Feb 16 17:23:05 crc kubenswrapper[4794]: I0216 17:23:05.128943 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgmwp\" (UniqueName: \"kubernetes.io/projected/678592d4-5921-4bc3-bdc9-d47b36ffba37-kube-api-access-zgmwp\") pod \"heat-api-6df69d99d9-jgbc2\" (UID: \"678592d4-5921-4bc3-bdc9-d47b36ffba37\") " pod="openstack/heat-api-6df69d99d9-jgbc2" Feb 16 
17:23:05 crc kubenswrapper[4794]: I0216 17:23:05.129341 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrr99\" (UniqueName: \"kubernetes.io/projected/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-kube-api-access-nrr99\") pod \"heat-cfnapi-67dd4d6c5f-84d7v\" (UID: \"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d\") " pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" Feb 16 17:23:05 crc kubenswrapper[4794]: I0216 17:23:05.136151 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6df69d99d9-jgbc2" Feb 16 17:23:05 crc kubenswrapper[4794]: I0216 17:23:05.233889 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" Feb 16 17:23:05 crc kubenswrapper[4794]: I0216 17:23:05.619471 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-599bd89595-29q2j"] Feb 16 17:23:05 crc kubenswrapper[4794]: W0216 17:23:05.639641 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1082952_332b_4c48_b37b_52c919f87f0f.slice/crio-f968b5c3e3c7f091d2891e3310e9b9e08282c503b94efa1791dbceacf77339b6 WatchSource:0}: Error finding container f968b5c3e3c7f091d2891e3310e9b9e08282c503b94efa1791dbceacf77339b6: Status 404 returned error can't find the container with id f968b5c3e3c7f091d2891e3310e9b9e08282c503b94efa1791dbceacf77339b6 Feb 16 17:23:05 crc kubenswrapper[4794]: I0216 17:23:05.865009 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-4pz6j"] Feb 16 17:23:06 crc kubenswrapper[4794]: I0216 17:23:06.122081 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6df69d99d9-jgbc2"] Feb 16 17:23:06 crc kubenswrapper[4794]: I0216 17:23:06.150113 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-67dd4d6c5f-84d7v"] Feb 16 17:23:06 crc kubenswrapper[4794]: I0216 
17:23:06.375393 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6df69d99d9-jgbc2" event={"ID":"678592d4-5921-4bc3-bdc9-d47b36ffba37","Type":"ContainerStarted","Data":"0876de444455ca21b42df3630f295022f846104fc11761e9b45cb8473280e91e"} Feb 16 17:23:06 crc kubenswrapper[4794]: I0216 17:23:06.379876 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" event={"ID":"fb9be534-e864-414a-8dcd-6c9457f6f0bc","Type":"ContainerStarted","Data":"2f9a52264662941bf1ae701a008247ee70c257c8840bd22422657d8b15faeb55"} Feb 16 17:23:06 crc kubenswrapper[4794]: I0216 17:23:06.379916 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" event={"ID":"fb9be534-e864-414a-8dcd-6c9457f6f0bc","Type":"ContainerStarted","Data":"672d404498dbe189f44ab1a5f7b8057cfa93ced2d96270c1a240d21472c3607f"} Feb 16 17:23:06 crc kubenswrapper[4794]: I0216 17:23:06.388832 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-599bd89595-29q2j" event={"ID":"f1082952-332b-4c48-b37b-52c919f87f0f","Type":"ContainerStarted","Data":"bfb3f2ff3e912556f3ae81e4ead3c17ca392e240787708cd7bf47fb132d69f7b"} Feb 16 17:23:06 crc kubenswrapper[4794]: I0216 17:23:06.388874 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-599bd89595-29q2j" event={"ID":"f1082952-332b-4c48-b37b-52c919f87f0f","Type":"ContainerStarted","Data":"f968b5c3e3c7f091d2891e3310e9b9e08282c503b94efa1791dbceacf77339b6"} Feb 16 17:23:06 crc kubenswrapper[4794]: I0216 17:23:06.390123 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-599bd89595-29q2j" Feb 16 17:23:06 crc kubenswrapper[4794]: I0216 17:23:06.419759 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" 
event={"ID":"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d","Type":"ContainerStarted","Data":"dbc94f8b441130089bd538bcd87c65bbddce4ad11b8b7ebeaabfc6aa6c088dba"} Feb 16 17:23:06 crc kubenswrapper[4794]: I0216 17:23:06.475849 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-599bd89595-29q2j" podStartSLOduration=2.4758283580000002 podStartE2EDuration="2.475828358s" podCreationTimestamp="2026-02-16 17:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:06.455223136 +0000 UTC m=+1412.403317773" watchObservedRunningTime="2026-02-16 17:23:06.475828358 +0000 UTC m=+1412.423923005" Feb 16 17:23:07 crc kubenswrapper[4794]: I0216 17:23:07.109320 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 17:23:07 crc kubenswrapper[4794]: I0216 17:23:07.436888 4794 generic.go:334] "Generic (PLEG): container finished" podID="fb9be534-e864-414a-8dcd-6c9457f6f0bc" containerID="2f9a52264662941bf1ae701a008247ee70c257c8840bd22422657d8b15faeb55" exitCode=0 Feb 16 17:23:07 crc kubenswrapper[4794]: I0216 17:23:07.438014 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" event={"ID":"fb9be534-e864-414a-8dcd-6c9457f6f0bc","Type":"ContainerDied","Data":"2f9a52264662941bf1ae701a008247ee70c257c8840bd22422657d8b15faeb55"} Feb 16 17:23:07 crc kubenswrapper[4794]: I0216 17:23:07.438039 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" event={"ID":"fb9be534-e864-414a-8dcd-6c9457f6f0bc","Type":"ContainerStarted","Data":"5f4596ab47307f80837b2a4ca53f5e88933927c6306098f0ab7184132bd7c176"} Feb 16 17:23:07 crc kubenswrapper[4794]: I0216 17:23:07.438076 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:07 crc 
kubenswrapper[4794]: I0216 17:23:07.465985 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" podStartSLOduration=3.4659628 podStartE2EDuration="3.4659628s" podCreationTimestamp="2026-02-16 17:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:07.4606491 +0000 UTC m=+1413.408743757" watchObservedRunningTime="2026-02-16 17:23:07.4659628 +0000 UTC m=+1413.414057437" Feb 16 17:23:10 crc kubenswrapper[4794]: I0216 17:23:10.481594 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6df69d99d9-jgbc2" event={"ID":"678592d4-5921-4bc3-bdc9-d47b36ffba37","Type":"ContainerStarted","Data":"bdcbee2f070a73c1f074e7323388703fc457c7eb6c68b2bb52e4fc93498634db"} Feb 16 17:23:10 crc kubenswrapper[4794]: I0216 17:23:10.485197 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6df69d99d9-jgbc2" Feb 16 17:23:10 crc kubenswrapper[4794]: I0216 17:23:10.488414 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" event={"ID":"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d","Type":"ContainerStarted","Data":"0faed2d1e1da7b0286d9ec595c654882f7ae55963df2cf9314a32b6a8189506c"} Feb 16 17:23:10 crc kubenswrapper[4794]: I0216 17:23:10.488964 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" Feb 16 17:23:10 crc kubenswrapper[4794]: I0216 17:23:10.509033 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6df69d99d9-jgbc2" podStartSLOduration=3.448200509 podStartE2EDuration="6.509010977s" podCreationTimestamp="2026-02-16 17:23:04 +0000 UTC" firstStartedPulling="2026-02-16 17:23:06.122383161 +0000 UTC m=+1412.070477818" lastFinishedPulling="2026-02-16 17:23:09.183193639 +0000 UTC m=+1415.131288286" 
observedRunningTime="2026-02-16 17:23:10.505788206 +0000 UTC m=+1416.453882863" watchObservedRunningTime="2026-02-16 17:23:10.509010977 +0000 UTC m=+1416.457105624" Feb 16 17:23:10 crc kubenswrapper[4794]: I0216 17:23:10.534789 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" podStartSLOduration=3.549259652 podStartE2EDuration="6.534766724s" podCreationTimestamp="2026-02-16 17:23:04 +0000 UTC" firstStartedPulling="2026-02-16 17:23:06.181926232 +0000 UTC m=+1412.130020879" lastFinishedPulling="2026-02-16 17:23:09.167433304 +0000 UTC m=+1415.115527951" observedRunningTime="2026-02-16 17:23:10.524503704 +0000 UTC m=+1416.472598351" watchObservedRunningTime="2026-02-16 17:23:10.534766724 +0000 UTC m=+1416.482861371" Feb 16 17:23:11 crc kubenswrapper[4794]: I0216 17:23:11.015151 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.479534 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6cb67474dc-d4tmw"] Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.488851 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.492392 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.492759 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.494816 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.583108 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6cb67474dc-d4tmw"] Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.633540 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd56173e-c7f0-4309-97a9-4bd89c7704f3-internal-tls-certs\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.634883 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd56173e-c7f0-4309-97a9-4bd89c7704f3-combined-ca-bundle\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.634961 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4phx5\" (UniqueName: \"kubernetes.io/projected/cd56173e-c7f0-4309-97a9-4bd89c7704f3-kube-api-access-4phx5\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 
crc kubenswrapper[4794]: I0216 17:23:12.635005 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd56173e-c7f0-4309-97a9-4bd89c7704f3-public-tls-certs\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.635121 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd56173e-c7f0-4309-97a9-4bd89c7704f3-run-httpd\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.635171 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cd56173e-c7f0-4309-97a9-4bd89c7704f3-etc-swift\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.635197 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd56173e-c7f0-4309-97a9-4bd89c7704f3-log-httpd\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.636644 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd56173e-c7f0-4309-97a9-4bd89c7704f3-config-data\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 
17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.739321 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd56173e-c7f0-4309-97a9-4bd89c7704f3-internal-tls-certs\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.739466 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd56173e-c7f0-4309-97a9-4bd89c7704f3-combined-ca-bundle\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.739495 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4phx5\" (UniqueName: \"kubernetes.io/projected/cd56173e-c7f0-4309-97a9-4bd89c7704f3-kube-api-access-4phx5\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.739518 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd56173e-c7f0-4309-97a9-4bd89c7704f3-public-tls-certs\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.739560 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd56173e-c7f0-4309-97a9-4bd89c7704f3-run-httpd\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 
17:23:12.739788 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cd56173e-c7f0-4309-97a9-4bd89c7704f3-etc-swift\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.739816 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd56173e-c7f0-4309-97a9-4bd89c7704f3-log-httpd\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.740507 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd56173e-c7f0-4309-97a9-4bd89c7704f3-log-httpd\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.740753 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd56173e-c7f0-4309-97a9-4bd89c7704f3-run-httpd\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.740783 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd56173e-c7f0-4309-97a9-4bd89c7704f3-config-data\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.747549 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/cd56173e-c7f0-4309-97a9-4bd89c7704f3-internal-tls-certs\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.750880 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd56173e-c7f0-4309-97a9-4bd89c7704f3-combined-ca-bundle\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.766599 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd56173e-c7f0-4309-97a9-4bd89c7704f3-config-data\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.773852 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cd56173e-c7f0-4309-97a9-4bd89c7704f3-etc-swift\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.774897 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd56173e-c7f0-4309-97a9-4bd89c7704f3-public-tls-certs\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.779336 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4phx5\" (UniqueName: 
\"kubernetes.io/projected/cd56173e-c7f0-4309-97a9-4bd89c7704f3-kube-api-access-4phx5\") pod \"swift-proxy-6cb67474dc-d4tmw\" (UID: \"cd56173e-c7f0-4309-97a9-4bd89c7704f3\") " pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:12 crc kubenswrapper[4794]: I0216 17:23:12.814789 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.594683 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-547586545-c5624"] Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.596739 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-547586545-c5624" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.617352 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-547586545-c5624"] Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.640651 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-54fcd965cd-jvdzj"] Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.642205 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-54fcd965cd-jvdzj" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.656534 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-54fbcd866f-568dg"] Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.658077 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-54fbcd866f-568dg" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.684988 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-54fbcd866f-568dg"] Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.701001 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-54fcd965cd-jvdzj"] Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.774582 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knwkv\" (UniqueName: \"kubernetes.io/projected/d1755f51-5efd-43e0-902e-c2c1b6760350-kube-api-access-knwkv\") pod \"heat-api-54fcd965cd-jvdzj\" (UID: \"d1755f51-5efd-43e0-902e-c2c1b6760350\") " pod="openstack/heat-api-54fcd965cd-jvdzj" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.774653 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1755f51-5efd-43e0-902e-c2c1b6760350-combined-ca-bundle\") pod \"heat-api-54fcd965cd-jvdzj\" (UID: \"d1755f51-5efd-43e0-902e-c2c1b6760350\") " pod="openstack/heat-api-54fcd965cd-jvdzj" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.774674 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0403b0e-4120-4eb9-b7ed-dcfafb224d46-combined-ca-bundle\") pod \"heat-engine-547586545-c5624\" (UID: \"c0403b0e-4120-4eb9-b7ed-dcfafb224d46\") " pod="openstack/heat-engine-547586545-c5624" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.774708 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-config-data-custom\") pod \"heat-cfnapi-54fbcd866f-568dg\" (UID: \"07460b16-5cea-4a16-8389-dc1d3e7c3ee8\") " 
pod="openstack/heat-cfnapi-54fbcd866f-568dg" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.774750 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1755f51-5efd-43e0-902e-c2c1b6760350-config-data-custom\") pod \"heat-api-54fcd965cd-jvdzj\" (UID: \"d1755f51-5efd-43e0-902e-c2c1b6760350\") " pod="openstack/heat-api-54fcd965cd-jvdzj" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.774768 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0403b0e-4120-4eb9-b7ed-dcfafb224d46-config-data-custom\") pod \"heat-engine-547586545-c5624\" (UID: \"c0403b0e-4120-4eb9-b7ed-dcfafb224d46\") " pod="openstack/heat-engine-547586545-c5624" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.774815 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6dqp\" (UniqueName: \"kubernetes.io/projected/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-kube-api-access-f6dqp\") pod \"heat-cfnapi-54fbcd866f-568dg\" (UID: \"07460b16-5cea-4a16-8389-dc1d3e7c3ee8\") " pod="openstack/heat-cfnapi-54fbcd866f-568dg" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.774850 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-config-data\") pod \"heat-cfnapi-54fbcd866f-568dg\" (UID: \"07460b16-5cea-4a16-8389-dc1d3e7c3ee8\") " pod="openstack/heat-cfnapi-54fbcd866f-568dg" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.774901 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0403b0e-4120-4eb9-b7ed-dcfafb224d46-config-data\") pod \"heat-engine-547586545-c5624\" (UID: 
\"c0403b0e-4120-4eb9-b7ed-dcfafb224d46\") " pod="openstack/heat-engine-547586545-c5624" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.774933 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-combined-ca-bundle\") pod \"heat-cfnapi-54fbcd866f-568dg\" (UID: \"07460b16-5cea-4a16-8389-dc1d3e7c3ee8\") " pod="openstack/heat-cfnapi-54fbcd866f-568dg" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.774952 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1755f51-5efd-43e0-902e-c2c1b6760350-config-data\") pod \"heat-api-54fcd965cd-jvdzj\" (UID: \"d1755f51-5efd-43e0-902e-c2c1b6760350\") " pod="openstack/heat-api-54fcd965cd-jvdzj" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.775009 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s8qt\" (UniqueName: \"kubernetes.io/projected/c0403b0e-4120-4eb9-b7ed-dcfafb224d46-kube-api-access-6s8qt\") pod \"heat-engine-547586545-c5624\" (UID: \"c0403b0e-4120-4eb9-b7ed-dcfafb224d46\") " pod="openstack/heat-engine-547586545-c5624" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.879892 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6dqp\" (UniqueName: \"kubernetes.io/projected/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-kube-api-access-f6dqp\") pod \"heat-cfnapi-54fbcd866f-568dg\" (UID: \"07460b16-5cea-4a16-8389-dc1d3e7c3ee8\") " pod="openstack/heat-cfnapi-54fbcd866f-568dg" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.880969 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-config-data\") pod 
\"heat-cfnapi-54fbcd866f-568dg\" (UID: \"07460b16-5cea-4a16-8389-dc1d3e7c3ee8\") " pod="openstack/heat-cfnapi-54fbcd866f-568dg" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.884529 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0403b0e-4120-4eb9-b7ed-dcfafb224d46-config-data\") pod \"heat-engine-547586545-c5624\" (UID: \"c0403b0e-4120-4eb9-b7ed-dcfafb224d46\") " pod="openstack/heat-engine-547586545-c5624" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.884651 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-combined-ca-bundle\") pod \"heat-cfnapi-54fbcd866f-568dg\" (UID: \"07460b16-5cea-4a16-8389-dc1d3e7c3ee8\") " pod="openstack/heat-cfnapi-54fbcd866f-568dg" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.884697 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1755f51-5efd-43e0-902e-c2c1b6760350-config-data\") pod \"heat-api-54fcd965cd-jvdzj\" (UID: \"d1755f51-5efd-43e0-902e-c2c1b6760350\") " pod="openstack/heat-api-54fcd965cd-jvdzj" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.884960 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s8qt\" (UniqueName: \"kubernetes.io/projected/c0403b0e-4120-4eb9-b7ed-dcfafb224d46-kube-api-access-6s8qt\") pod \"heat-engine-547586545-c5624\" (UID: \"c0403b0e-4120-4eb9-b7ed-dcfafb224d46\") " pod="openstack/heat-engine-547586545-c5624" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.885052 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knwkv\" (UniqueName: \"kubernetes.io/projected/d1755f51-5efd-43e0-902e-c2c1b6760350-kube-api-access-knwkv\") pod \"heat-api-54fcd965cd-jvdzj\" (UID: 
\"d1755f51-5efd-43e0-902e-c2c1b6760350\") " pod="openstack/heat-api-54fcd965cd-jvdzj" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.885163 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1755f51-5efd-43e0-902e-c2c1b6760350-combined-ca-bundle\") pod \"heat-api-54fcd965cd-jvdzj\" (UID: \"d1755f51-5efd-43e0-902e-c2c1b6760350\") " pod="openstack/heat-api-54fcd965cd-jvdzj" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.885200 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0403b0e-4120-4eb9-b7ed-dcfafb224d46-combined-ca-bundle\") pod \"heat-engine-547586545-c5624\" (UID: \"c0403b0e-4120-4eb9-b7ed-dcfafb224d46\") " pod="openstack/heat-engine-547586545-c5624" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.885276 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-config-data-custom\") pod \"heat-cfnapi-54fbcd866f-568dg\" (UID: \"07460b16-5cea-4a16-8389-dc1d3e7c3ee8\") " pod="openstack/heat-cfnapi-54fbcd866f-568dg" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.885386 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1755f51-5efd-43e0-902e-c2c1b6760350-config-data-custom\") pod \"heat-api-54fcd965cd-jvdzj\" (UID: \"d1755f51-5efd-43e0-902e-c2c1b6760350\") " pod="openstack/heat-api-54fcd965cd-jvdzj" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.885419 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0403b0e-4120-4eb9-b7ed-dcfafb224d46-config-data-custom\") pod \"heat-engine-547586545-c5624\" (UID: \"c0403b0e-4120-4eb9-b7ed-dcfafb224d46\") " 
pod="openstack/heat-engine-547586545-c5624" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.893266 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-config-data\") pod \"heat-cfnapi-54fbcd866f-568dg\" (UID: \"07460b16-5cea-4a16-8389-dc1d3e7c3ee8\") " pod="openstack/heat-cfnapi-54fbcd866f-568dg" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.904111 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1755f51-5efd-43e0-902e-c2c1b6760350-combined-ca-bundle\") pod \"heat-api-54fcd965cd-jvdzj\" (UID: \"d1755f51-5efd-43e0-902e-c2c1b6760350\") " pod="openstack/heat-api-54fcd965cd-jvdzj" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.904838 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-combined-ca-bundle\") pod \"heat-cfnapi-54fbcd866f-568dg\" (UID: \"07460b16-5cea-4a16-8389-dc1d3e7c3ee8\") " pod="openstack/heat-cfnapi-54fbcd866f-568dg" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.905269 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-config-data-custom\") pod \"heat-cfnapi-54fbcd866f-568dg\" (UID: \"07460b16-5cea-4a16-8389-dc1d3e7c3ee8\") " pod="openstack/heat-cfnapi-54fbcd866f-568dg" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.913524 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0403b0e-4120-4eb9-b7ed-dcfafb224d46-config-data\") pod \"heat-engine-547586545-c5624\" (UID: \"c0403b0e-4120-4eb9-b7ed-dcfafb224d46\") " pod="openstack/heat-engine-547586545-c5624" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 
17:23:13.916248 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1755f51-5efd-43e0-902e-c2c1b6760350-config-data\") pod \"heat-api-54fcd965cd-jvdzj\" (UID: \"d1755f51-5efd-43e0-902e-c2c1b6760350\") " pod="openstack/heat-api-54fcd965cd-jvdzj" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.918556 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0403b0e-4120-4eb9-b7ed-dcfafb224d46-combined-ca-bundle\") pod \"heat-engine-547586545-c5624\" (UID: \"c0403b0e-4120-4eb9-b7ed-dcfafb224d46\") " pod="openstack/heat-engine-547586545-c5624" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.929649 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1755f51-5efd-43e0-902e-c2c1b6760350-config-data-custom\") pod \"heat-api-54fcd965cd-jvdzj\" (UID: \"d1755f51-5efd-43e0-902e-c2c1b6760350\") " pod="openstack/heat-api-54fcd965cd-jvdzj" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.934359 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knwkv\" (UniqueName: \"kubernetes.io/projected/d1755f51-5efd-43e0-902e-c2c1b6760350-kube-api-access-knwkv\") pod \"heat-api-54fcd965cd-jvdzj\" (UID: \"d1755f51-5efd-43e0-902e-c2c1b6760350\") " pod="openstack/heat-api-54fcd965cd-jvdzj" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.947081 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6dqp\" (UniqueName: \"kubernetes.io/projected/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-kube-api-access-f6dqp\") pod \"heat-cfnapi-54fbcd866f-568dg\" (UID: \"07460b16-5cea-4a16-8389-dc1d3e7c3ee8\") " pod="openstack/heat-cfnapi-54fbcd866f-568dg" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.959353 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-6s8qt\" (UniqueName: \"kubernetes.io/projected/c0403b0e-4120-4eb9-b7ed-dcfafb224d46-kube-api-access-6s8qt\") pod \"heat-engine-547586545-c5624\" (UID: \"c0403b0e-4120-4eb9-b7ed-dcfafb224d46\") " pod="openstack/heat-engine-547586545-c5624" Feb 16 17:23:13 crc kubenswrapper[4794]: I0216 17:23:13.987542 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0403b0e-4120-4eb9-b7ed-dcfafb224d46-config-data-custom\") pod \"heat-engine-547586545-c5624\" (UID: \"c0403b0e-4120-4eb9-b7ed-dcfafb224d46\") " pod="openstack/heat-engine-547586545-c5624" Feb 16 17:23:14 crc kubenswrapper[4794]: I0216 17:23:14.029069 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-54fcd965cd-jvdzj" Feb 16 17:23:14 crc kubenswrapper[4794]: I0216 17:23:14.065335 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-54fbcd866f-568dg" Feb 16 17:23:14 crc kubenswrapper[4794]: I0216 17:23:14.134487 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6cb67474dc-d4tmw"] Feb 16 17:23:14 crc kubenswrapper[4794]: I0216 17:23:14.244538 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-547586545-c5624" Feb 16 17:23:15 crc kubenswrapper[4794]: I0216 17:23:15.098462 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:23:15 crc kubenswrapper[4794]: I0216 17:23:15.162236 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-r78sp"] Feb 16 17:23:15 crc kubenswrapper[4794]: I0216 17:23:15.162485 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-r78sp" podUID="c932199b-1077-4aa1-aa88-7867c5c84212" containerName="dnsmasq-dns" containerID="cri-o://fe46504ecb88e7e48e3acf8fb9b0e1f42b729b703c11863dd2c44da68e2cfa2e" gracePeriod=10 Feb 16 17:23:15 crc kubenswrapper[4794]: I0216 17:23:15.589693 4794 generic.go:334] "Generic (PLEG): container finished" podID="c932199b-1077-4aa1-aa88-7867c5c84212" containerID="fe46504ecb88e7e48e3acf8fb9b0e1f42b729b703c11863dd2c44da68e2cfa2e" exitCode=0 Feb 16 17:23:15 crc kubenswrapper[4794]: I0216 17:23:15.589741 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-r78sp" event={"ID":"c932199b-1077-4aa1-aa88-7867c5c84212","Type":"ContainerDied","Data":"fe46504ecb88e7e48e3acf8fb9b0e1f42b729b703c11863dd2c44da68e2cfa2e"} Feb 16 17:23:15 crc kubenswrapper[4794]: I0216 17:23:15.615285 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6cdff78ddf-hj4zf" Feb 16 17:23:15 crc kubenswrapper[4794]: I0216 17:23:15.704595 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-99f86f5f6-sdjdr"] Feb 16 17:23:15 crc kubenswrapper[4794]: I0216 17:23:15.705726 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-99f86f5f6-sdjdr" podUID="5b69fea3-061c-40bb-86ff-ca8af8587049" containerName="neutron-api" 
containerID="cri-o://12db5c04ecc3f1a679a59c218185982a095aeb876b9954d19b4c4aecd06fef40" gracePeriod=30 Feb 16 17:23:15 crc kubenswrapper[4794]: I0216 17:23:15.706281 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-99f86f5f6-sdjdr" podUID="5b69fea3-061c-40bb-86ff-ca8af8587049" containerName="neutron-httpd" containerID="cri-o://e72c1c52f98c2b1baaab6d99b99add46e1dd0d4a019fedd86f26bdd1e4265a79" gracePeriod=30 Feb 16 17:23:16 crc kubenswrapper[4794]: E0216 17:23:16.208524 4794 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ddbc68_5d23_420f_a844_c55759155260.slice/crio-conmon-201f8bd8e605ac419284e64c74d6ff0721a0638fba59d4096a40a4878d91a42b.scope\": RecentStats: unable to find data in memory cache]" Feb 16 17:23:16 crc kubenswrapper[4794]: I0216 17:23:16.431384 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-r78sp" podUID="c932199b-1077-4aa1-aa88-7867c5c84212" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.204:5353: connect: connection refused" Feb 16 17:23:16 crc kubenswrapper[4794]: I0216 17:23:16.587182 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="a3ddbc68-5d23-420f-a844-c55759155260" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.205:8776/healthcheck\": dial tcp 10.217.0.205:8776: connect: connection refused" Feb 16 17:23:16 crc kubenswrapper[4794]: I0216 17:23:16.625897 4794 generic.go:334] "Generic (PLEG): container finished" podID="a3ddbc68-5d23-420f-a844-c55759155260" containerID="201f8bd8e605ac419284e64c74d6ff0721a0638fba59d4096a40a4878d91a42b" exitCode=137 Feb 16 17:23:16 crc kubenswrapper[4794]: I0216 17:23:16.625967 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"a3ddbc68-5d23-420f-a844-c55759155260","Type":"ContainerDied","Data":"201f8bd8e605ac419284e64c74d6ff0721a0638fba59d4096a40a4878d91a42b"} Feb 16 17:23:16 crc kubenswrapper[4794]: I0216 17:23:16.628610 4794 generic.go:334] "Generic (PLEG): container finished" podID="5b69fea3-061c-40bb-86ff-ca8af8587049" containerID="e72c1c52f98c2b1baaab6d99b99add46e1dd0d4a019fedd86f26bdd1e4265a79" exitCode=0 Feb 16 17:23:16 crc kubenswrapper[4794]: I0216 17:23:16.628653 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-99f86f5f6-sdjdr" event={"ID":"5b69fea3-061c-40bb-86ff-ca8af8587049","Type":"ContainerDied","Data":"e72c1c52f98c2b1baaab6d99b99add46e1dd0d4a019fedd86f26bdd1e4265a79"} Feb 16 17:23:16 crc kubenswrapper[4794]: I0216 17:23:16.818446 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:23:16 crc kubenswrapper[4794]: I0216 17:23:16.818752 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ddd9f53-b6ac-4624-92b4-a076ad62d8de" containerName="ceilometer-central-agent" containerID="cri-o://68cb34ae47ee273c416e83f42d642c333007a14d80b0330f29a6a98f12df8e42" gracePeriod=30 Feb 16 17:23:16 crc kubenswrapper[4794]: I0216 17:23:16.819164 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ddd9f53-b6ac-4624-92b4-a076ad62d8de" containerName="proxy-httpd" containerID="cri-o://d8ca38db2ddc3163a6c287eb50021e606fa8f95771f31edd7027c566317049fa" gracePeriod=30 Feb 16 17:23:16 crc kubenswrapper[4794]: I0216 17:23:16.819224 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ddd9f53-b6ac-4624-92b4-a076ad62d8de" containerName="sg-core" containerID="cri-o://5fc05aaf20aad6db9c0337f839a3c35aef358c9173809b50b0e5fbfe3bfeed28" gracePeriod=30 Feb 16 17:23:16 crc kubenswrapper[4794]: I0216 17:23:16.819271 4794 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ddd9f53-b6ac-4624-92b4-a076ad62d8de" containerName="ceilometer-notification-agent" containerID="cri-o://b7b183697fa62a18c0922b1d7b0ce4e7591a68eadb850d74d1de2eba8b6349a2" gracePeriod=30 Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.147759 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6df69d99d9-jgbc2"] Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.149543 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6df69d99d9-jgbc2" podUID="678592d4-5921-4bc3-bdc9-d47b36ffba37" containerName="heat-api" containerID="cri-o://bdcbee2f070a73c1f074e7323388703fc457c7eb6c68b2bb52e4fc93498634db" gracePeriod=60 Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.165511 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-67dd4d6c5f-84d7v"] Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.165733 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" podUID="f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d" containerName="heat-cfnapi" containerID="cri-o://0faed2d1e1da7b0286d9ec595c654882f7ae55963df2cf9314a32b6a8189506c" gracePeriod=60 Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.202167 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-868454c84d-mwnsk"] Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.203908 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-868454c84d-mwnsk" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.204169 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6df69d99d9-jgbc2" podUID="678592d4-5921-4bc3-bdc9-d47b36ffba37" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.214:8004/healthcheck\": EOF" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.204186 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" podUID="f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.213:8000/healthcheck\": EOF" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.204946 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" podUID="f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.213:8000/healthcheck\": EOF" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.205737 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-api-6df69d99d9-jgbc2" podUID="678592d4-5921-4bc3-bdc9-d47b36ffba37" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.214:8004/healthcheck\": EOF" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.211316 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-849cbf9447-6chxp"] Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.212640 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.212782 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-849cbf9447-6chxp" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.220531 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.221336 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.221433 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.266029 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-849cbf9447-6chxp"] Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.286639 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-868454c84d-mwnsk"] Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.288510 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57584011-2a08-4edd-a53a-fa54541cfc82-config-data\") pod \"heat-api-868454c84d-mwnsk\" (UID: \"57584011-2a08-4edd-a53a-fa54541cfc82\") " pod="openstack/heat-api-868454c84d-mwnsk" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.288576 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccd75b14-da33-40cb-ace9-fae71c629d01-config-data-custom\") pod \"heat-cfnapi-849cbf9447-6chxp\" (UID: \"ccd75b14-da33-40cb-ace9-fae71c629d01\") " pod="openstack/heat-cfnapi-849cbf9447-6chxp" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.288628 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m89wr\" (UniqueName: 
\"kubernetes.io/projected/ccd75b14-da33-40cb-ace9-fae71c629d01-kube-api-access-m89wr\") pod \"heat-cfnapi-849cbf9447-6chxp\" (UID: \"ccd75b14-da33-40cb-ace9-fae71c629d01\") " pod="openstack/heat-cfnapi-849cbf9447-6chxp" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.288686 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccd75b14-da33-40cb-ace9-fae71c629d01-config-data\") pod \"heat-cfnapi-849cbf9447-6chxp\" (UID: \"ccd75b14-da33-40cb-ace9-fae71c629d01\") " pod="openstack/heat-cfnapi-849cbf9447-6chxp" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.288737 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs7pp\" (UniqueName: \"kubernetes.io/projected/57584011-2a08-4edd-a53a-fa54541cfc82-kube-api-access-rs7pp\") pod \"heat-api-868454c84d-mwnsk\" (UID: \"57584011-2a08-4edd-a53a-fa54541cfc82\") " pod="openstack/heat-api-868454c84d-mwnsk" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.288767 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd75b14-da33-40cb-ace9-fae71c629d01-combined-ca-bundle\") pod \"heat-cfnapi-849cbf9447-6chxp\" (UID: \"ccd75b14-da33-40cb-ace9-fae71c629d01\") " pod="openstack/heat-cfnapi-849cbf9447-6chxp" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.288850 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57584011-2a08-4edd-a53a-fa54541cfc82-combined-ca-bundle\") pod \"heat-api-868454c84d-mwnsk\" (UID: \"57584011-2a08-4edd-a53a-fa54541cfc82\") " pod="openstack/heat-api-868454c84d-mwnsk" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.288974 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57584011-2a08-4edd-a53a-fa54541cfc82-public-tls-certs\") pod \"heat-api-868454c84d-mwnsk\" (UID: \"57584011-2a08-4edd-a53a-fa54541cfc82\") " pod="openstack/heat-api-868454c84d-mwnsk" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.289006 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccd75b14-da33-40cb-ace9-fae71c629d01-internal-tls-certs\") pod \"heat-cfnapi-849cbf9447-6chxp\" (UID: \"ccd75b14-da33-40cb-ace9-fae71c629d01\") " pod="openstack/heat-cfnapi-849cbf9447-6chxp" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.289026 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57584011-2a08-4edd-a53a-fa54541cfc82-internal-tls-certs\") pod \"heat-api-868454c84d-mwnsk\" (UID: \"57584011-2a08-4edd-a53a-fa54541cfc82\") " pod="openstack/heat-api-868454c84d-mwnsk" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.289054 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57584011-2a08-4edd-a53a-fa54541cfc82-config-data-custom\") pod \"heat-api-868454c84d-mwnsk\" (UID: \"57584011-2a08-4edd-a53a-fa54541cfc82\") " pod="openstack/heat-api-868454c84d-mwnsk" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.289092 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccd75b14-da33-40cb-ace9-fae71c629d01-public-tls-certs\") pod \"heat-cfnapi-849cbf9447-6chxp\" (UID: \"ccd75b14-da33-40cb-ace9-fae71c629d01\") " pod="openstack/heat-cfnapi-849cbf9447-6chxp" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.391018 4794 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57584011-2a08-4edd-a53a-fa54541cfc82-public-tls-certs\") pod \"heat-api-868454c84d-mwnsk\" (UID: \"57584011-2a08-4edd-a53a-fa54541cfc82\") " pod="openstack/heat-api-868454c84d-mwnsk" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.391082 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccd75b14-da33-40cb-ace9-fae71c629d01-internal-tls-certs\") pod \"heat-cfnapi-849cbf9447-6chxp\" (UID: \"ccd75b14-da33-40cb-ace9-fae71c629d01\") " pod="openstack/heat-cfnapi-849cbf9447-6chxp" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.391106 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57584011-2a08-4edd-a53a-fa54541cfc82-internal-tls-certs\") pod \"heat-api-868454c84d-mwnsk\" (UID: \"57584011-2a08-4edd-a53a-fa54541cfc82\") " pod="openstack/heat-api-868454c84d-mwnsk" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.391133 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57584011-2a08-4edd-a53a-fa54541cfc82-config-data-custom\") pod \"heat-api-868454c84d-mwnsk\" (UID: \"57584011-2a08-4edd-a53a-fa54541cfc82\") " pod="openstack/heat-api-868454c84d-mwnsk" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.391161 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccd75b14-da33-40cb-ace9-fae71c629d01-public-tls-certs\") pod \"heat-cfnapi-849cbf9447-6chxp\" (UID: \"ccd75b14-da33-40cb-ace9-fae71c629d01\") " pod="openstack/heat-cfnapi-849cbf9447-6chxp" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.391252 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/57584011-2a08-4edd-a53a-fa54541cfc82-config-data\") pod \"heat-api-868454c84d-mwnsk\" (UID: \"57584011-2a08-4edd-a53a-fa54541cfc82\") " pod="openstack/heat-api-868454c84d-mwnsk" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.391283 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccd75b14-da33-40cb-ace9-fae71c629d01-config-data-custom\") pod \"heat-cfnapi-849cbf9447-6chxp\" (UID: \"ccd75b14-da33-40cb-ace9-fae71c629d01\") " pod="openstack/heat-cfnapi-849cbf9447-6chxp" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.392436 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m89wr\" (UniqueName: \"kubernetes.io/projected/ccd75b14-da33-40cb-ace9-fae71c629d01-kube-api-access-m89wr\") pod \"heat-cfnapi-849cbf9447-6chxp\" (UID: \"ccd75b14-da33-40cb-ace9-fae71c629d01\") " pod="openstack/heat-cfnapi-849cbf9447-6chxp" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.392505 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccd75b14-da33-40cb-ace9-fae71c629d01-config-data\") pod \"heat-cfnapi-849cbf9447-6chxp\" (UID: \"ccd75b14-da33-40cb-ace9-fae71c629d01\") " pod="openstack/heat-cfnapi-849cbf9447-6chxp" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.392562 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs7pp\" (UniqueName: \"kubernetes.io/projected/57584011-2a08-4edd-a53a-fa54541cfc82-kube-api-access-rs7pp\") pod \"heat-api-868454c84d-mwnsk\" (UID: \"57584011-2a08-4edd-a53a-fa54541cfc82\") " pod="openstack/heat-api-868454c84d-mwnsk" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.392594 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ccd75b14-da33-40cb-ace9-fae71c629d01-combined-ca-bundle\") pod \"heat-cfnapi-849cbf9447-6chxp\" (UID: \"ccd75b14-da33-40cb-ace9-fae71c629d01\") " pod="openstack/heat-cfnapi-849cbf9447-6chxp" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.392679 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57584011-2a08-4edd-a53a-fa54541cfc82-combined-ca-bundle\") pod \"heat-api-868454c84d-mwnsk\" (UID: \"57584011-2a08-4edd-a53a-fa54541cfc82\") " pod="openstack/heat-api-868454c84d-mwnsk" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.400933 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57584011-2a08-4edd-a53a-fa54541cfc82-config-data\") pod \"heat-api-868454c84d-mwnsk\" (UID: \"57584011-2a08-4edd-a53a-fa54541cfc82\") " pod="openstack/heat-api-868454c84d-mwnsk" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.401220 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57584011-2a08-4edd-a53a-fa54541cfc82-public-tls-certs\") pod \"heat-api-868454c84d-mwnsk\" (UID: \"57584011-2a08-4edd-a53a-fa54541cfc82\") " pod="openstack/heat-api-868454c84d-mwnsk" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.401223 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccd75b14-da33-40cb-ace9-fae71c629d01-public-tls-certs\") pod \"heat-cfnapi-849cbf9447-6chxp\" (UID: \"ccd75b14-da33-40cb-ace9-fae71c629d01\") " pod="openstack/heat-cfnapi-849cbf9447-6chxp" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.401896 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccd75b14-da33-40cb-ace9-fae71c629d01-config-data-custom\") pod 
\"heat-cfnapi-849cbf9447-6chxp\" (UID: \"ccd75b14-da33-40cb-ace9-fae71c629d01\") " pod="openstack/heat-cfnapi-849cbf9447-6chxp" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.402057 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccd75b14-da33-40cb-ace9-fae71c629d01-internal-tls-certs\") pod \"heat-cfnapi-849cbf9447-6chxp\" (UID: \"ccd75b14-da33-40cb-ace9-fae71c629d01\") " pod="openstack/heat-cfnapi-849cbf9447-6chxp" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.402459 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57584011-2a08-4edd-a53a-fa54541cfc82-combined-ca-bundle\") pod \"heat-api-868454c84d-mwnsk\" (UID: \"57584011-2a08-4edd-a53a-fa54541cfc82\") " pod="openstack/heat-api-868454c84d-mwnsk" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.403022 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57584011-2a08-4edd-a53a-fa54541cfc82-internal-tls-certs\") pod \"heat-api-868454c84d-mwnsk\" (UID: \"57584011-2a08-4edd-a53a-fa54541cfc82\") " pod="openstack/heat-api-868454c84d-mwnsk" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.403991 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57584011-2a08-4edd-a53a-fa54541cfc82-config-data-custom\") pod \"heat-api-868454c84d-mwnsk\" (UID: \"57584011-2a08-4edd-a53a-fa54541cfc82\") " pod="openstack/heat-api-868454c84d-mwnsk" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.417428 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd75b14-da33-40cb-ace9-fae71c629d01-combined-ca-bundle\") pod \"heat-cfnapi-849cbf9447-6chxp\" (UID: \"ccd75b14-da33-40cb-ace9-fae71c629d01\") " 
pod="openstack/heat-cfnapi-849cbf9447-6chxp" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.417663 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m89wr\" (UniqueName: \"kubernetes.io/projected/ccd75b14-da33-40cb-ace9-fae71c629d01-kube-api-access-m89wr\") pod \"heat-cfnapi-849cbf9447-6chxp\" (UID: \"ccd75b14-da33-40cb-ace9-fae71c629d01\") " pod="openstack/heat-cfnapi-849cbf9447-6chxp" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.418638 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs7pp\" (UniqueName: \"kubernetes.io/projected/57584011-2a08-4edd-a53a-fa54541cfc82-kube-api-access-rs7pp\") pod \"heat-api-868454c84d-mwnsk\" (UID: \"57584011-2a08-4edd-a53a-fa54541cfc82\") " pod="openstack/heat-api-868454c84d-mwnsk" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.429593 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccd75b14-da33-40cb-ace9-fae71c629d01-config-data\") pod \"heat-cfnapi-849cbf9447-6chxp\" (UID: \"ccd75b14-da33-40cb-ace9-fae71c629d01\") " pod="openstack/heat-cfnapi-849cbf9447-6chxp" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.528138 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-868454c84d-mwnsk" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.586943 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-849cbf9447-6chxp" Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.649635 4794 generic.go:334] "Generic (PLEG): container finished" podID="5ddd9f53-b6ac-4624-92b4-a076ad62d8de" containerID="d8ca38db2ddc3163a6c287eb50021e606fa8f95771f31edd7027c566317049fa" exitCode=0 Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.649663 4794 generic.go:334] "Generic (PLEG): container finished" podID="5ddd9f53-b6ac-4624-92b4-a076ad62d8de" containerID="5fc05aaf20aad6db9c0337f839a3c35aef358c9173809b50b0e5fbfe3bfeed28" exitCode=2 Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.649671 4794 generic.go:334] "Generic (PLEG): container finished" podID="5ddd9f53-b6ac-4624-92b4-a076ad62d8de" containerID="68cb34ae47ee273c416e83f42d642c333007a14d80b0330f29a6a98f12df8e42" exitCode=0 Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.649694 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddd9f53-b6ac-4624-92b4-a076ad62d8de","Type":"ContainerDied","Data":"d8ca38db2ddc3163a6c287eb50021e606fa8f95771f31edd7027c566317049fa"} Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.649723 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddd9f53-b6ac-4624-92b4-a076ad62d8de","Type":"ContainerDied","Data":"5fc05aaf20aad6db9c0337f839a3c35aef358c9173809b50b0e5fbfe3bfeed28"} Feb 16 17:23:17 crc kubenswrapper[4794]: I0216 17:23:17.649737 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddd9f53-b6ac-4624-92b4-a076ad62d8de","Type":"ContainerDied","Data":"68cb34ae47ee273c416e83f42d642c333007a14d80b0330f29a6a98f12df8e42"} Feb 16 17:23:20 crc kubenswrapper[4794]: I0216 17:23:20.140367 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:23:20 crc kubenswrapper[4794]: I0216 17:23:20.140674 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:23:20 crc kubenswrapper[4794]: I0216 17:23:20.687382 4794 generic.go:334] "Generic (PLEG): container finished" podID="5b69fea3-061c-40bb-86ff-ca8af8587049" containerID="12db5c04ecc3f1a679a59c218185982a095aeb876b9954d19b4c4aecd06fef40" exitCode=0 Feb 16 17:23:20 crc kubenswrapper[4794]: I0216 17:23:20.687440 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-99f86f5f6-sdjdr" event={"ID":"5b69fea3-061c-40bb-86ff-ca8af8587049","Type":"ContainerDied","Data":"12db5c04ecc3f1a679a59c218185982a095aeb876b9954d19b4c4aecd06fef40"} Feb 16 17:23:21 crc kubenswrapper[4794]: I0216 17:23:21.425828 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-r78sp" podUID="c932199b-1077-4aa1-aa88-7867c5c84212" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.204:5353: connect: connection refused" Feb 16 17:23:21 crc kubenswrapper[4794]: I0216 17:23:21.570720 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" podUID="f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.213:8000/healthcheck\": read tcp 10.217.0.2:41858->10.217.0.213:8000: read: connection reset by peer" Feb 16 17:23:21 crc kubenswrapper[4794]: I0216 17:23:21.571372 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" podUID="f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d" 
containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.213:8000/healthcheck\": dial tcp 10.217.0.213:8000: connect: connection refused" Feb 16 17:23:21 crc kubenswrapper[4794]: I0216 17:23:21.587277 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="a3ddbc68-5d23-420f-a844-c55759155260" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.205:8776/healthcheck\": dial tcp 10.217.0.205:8776: connect: connection refused" Feb 16 17:23:21 crc kubenswrapper[4794]: I0216 17:23:21.706290 4794 generic.go:334] "Generic (PLEG): container finished" podID="f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d" containerID="0faed2d1e1da7b0286d9ec595c654882f7ae55963df2cf9314a32b6a8189506c" exitCode=0 Feb 16 17:23:21 crc kubenswrapper[4794]: I0216 17:23:21.706423 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" event={"ID":"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d","Type":"ContainerDied","Data":"0faed2d1e1da7b0286d9ec595c654882f7ae55963df2cf9314a32b6a8189506c"} Feb 16 17:23:21 crc kubenswrapper[4794]: I0216 17:23:21.710943 4794 generic.go:334] "Generic (PLEG): container finished" podID="5ddd9f53-b6ac-4624-92b4-a076ad62d8de" containerID="b7b183697fa62a18c0922b1d7b0ce4e7591a68eadb850d74d1de2eba8b6349a2" exitCode=0 Feb 16 17:23:21 crc kubenswrapper[4794]: I0216 17:23:21.710981 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddd9f53-b6ac-4624-92b4-a076ad62d8de","Type":"ContainerDied","Data":"b7b183697fa62a18c0922b1d7b0ce4e7591a68eadb850d74d1de2eba8b6349a2"} Feb 16 17:23:21 crc kubenswrapper[4794]: I0216 17:23:21.748110 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6df69d99d9-jgbc2" podUID="678592d4-5921-4bc3-bdc9-d47b36ffba37" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.214:8004/healthcheck\": read tcp 10.217.0.2:50236->10.217.0.214:8004: read: 
connection reset by peer" Feb 16 17:23:21 crc kubenswrapper[4794]: I0216 17:23:21.748814 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6df69d99d9-jgbc2" podUID="678592d4-5921-4bc3-bdc9-d47b36ffba37" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.214:8004/healthcheck\": dial tcp 10.217.0.214:8004: connect: connection refused" Feb 16 17:23:22 crc kubenswrapper[4794]: I0216 17:23:22.728503 4794 generic.go:334] "Generic (PLEG): container finished" podID="678592d4-5921-4bc3-bdc9-d47b36ffba37" containerID="bdcbee2f070a73c1f074e7323388703fc457c7eb6c68b2bb52e4fc93498634db" exitCode=0 Feb 16 17:23:22 crc kubenswrapper[4794]: I0216 17:23:22.728559 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6df69d99d9-jgbc2" event={"ID":"678592d4-5921-4bc3-bdc9-d47b36ffba37","Type":"ContainerDied","Data":"bdcbee2f070a73c1f074e7323388703fc457c7eb6c68b2bb52e4fc93498634db"} Feb 16 17:23:23 crc kubenswrapper[4794]: I0216 17:23:23.770316 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" Feb 16 17:23:23 crc kubenswrapper[4794]: I0216 17:23:23.782068 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" event={"ID":"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d","Type":"ContainerDied","Data":"dbc94f8b441130089bd538bcd87c65bbddce4ad11b8b7ebeaabfc6aa6c088dba"} Feb 16 17:23:23 crc kubenswrapper[4794]: I0216 17:23:23.782133 4794 scope.go:117] "RemoveContainer" containerID="0faed2d1e1da7b0286d9ec595c654882f7ae55963df2cf9314a32b6a8189506c" Feb 16 17:23:23 crc kubenswrapper[4794]: I0216 17:23:23.789908 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6cb67474dc-d4tmw" event={"ID":"cd56173e-c7f0-4309-97a9-4bd89c7704f3","Type":"ContainerStarted","Data":"2f7bd459153af95e3dfa62f17677c10896da1bacb637254cd2f7f473b9929724"} Feb 16 17:23:23 crc kubenswrapper[4794]: I0216 17:23:23.879358 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-config-data-custom\") pod \"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d\" (UID: \"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d\") " Feb 16 17:23:23 crc kubenswrapper[4794]: I0216 17:23:23.879569 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrr99\" (UniqueName: \"kubernetes.io/projected/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-kube-api-access-nrr99\") pod \"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d\" (UID: \"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d\") " Feb 16 17:23:23 crc kubenswrapper[4794]: I0216 17:23:23.879737 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-config-data\") pod \"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d\" (UID: \"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d\") " Feb 16 17:23:23 crc kubenswrapper[4794]: I0216 
17:23:23.879798 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-combined-ca-bundle\") pod \"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d\" (UID: \"f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d\") " Feb 16 17:23:23 crc kubenswrapper[4794]: I0216 17:23:23.886865 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-kube-api-access-nrr99" (OuterVolumeSpecName: "kube-api-access-nrr99") pod "f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d" (UID: "f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d"). InnerVolumeSpecName "kube-api-access-nrr99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:23 crc kubenswrapper[4794]: I0216 17:23:23.899533 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d" (UID: "f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:23 crc kubenswrapper[4794]: I0216 17:23:23.982819 4794 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:23 crc kubenswrapper[4794]: I0216 17:23:23.982841 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrr99\" (UniqueName: \"kubernetes.io/projected/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-kube-api-access-nrr99\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.014838 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d" (UID: "f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.088428 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.147368 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-config-data" (OuterVolumeSpecName: "config-data") pod "f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d" (UID: "f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.202677 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.290416 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6df69d99d9-jgbc2" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.302013 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.304904 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/678592d4-5921-4bc3-bdc9-d47b36ffba37-config-data-custom\") pod \"678592d4-5921-4bc3-bdc9-d47b36ffba37\" (UID: \"678592d4-5921-4bc3-bdc9-d47b36ffba37\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.305000 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678592d4-5921-4bc3-bdc9-d47b36ffba37-combined-ca-bundle\") pod \"678592d4-5921-4bc3-bdc9-d47b36ffba37\" (UID: \"678592d4-5921-4bc3-bdc9-d47b36ffba37\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.305162 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678592d4-5921-4bc3-bdc9-d47b36ffba37-config-data\") pod \"678592d4-5921-4bc3-bdc9-d47b36ffba37\" (UID: \"678592d4-5921-4bc3-bdc9-d47b36ffba37\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.305313 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgmwp\" (UniqueName: 
\"kubernetes.io/projected/678592d4-5921-4bc3-bdc9-d47b36ffba37-kube-api-access-zgmwp\") pod \"678592d4-5921-4bc3-bdc9-d47b36ffba37\" (UID: \"678592d4-5921-4bc3-bdc9-d47b36ffba37\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.315679 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/678592d4-5921-4bc3-bdc9-d47b36ffba37-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "678592d4-5921-4bc3-bdc9-d47b36ffba37" (UID: "678592d4-5921-4bc3-bdc9-d47b36ffba37"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.320798 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/678592d4-5921-4bc3-bdc9-d47b36ffba37-kube-api-access-zgmwp" (OuterVolumeSpecName: "kube-api-access-zgmwp") pod "678592d4-5921-4bc3-bdc9-d47b36ffba37" (UID: "678592d4-5921-4bc3-bdc9-d47b36ffba37"). InnerVolumeSpecName "kube-api-access-zgmwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.327973 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.409297 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6rh\" (UniqueName: \"kubernetes.io/projected/a3ddbc68-5d23-420f-a844-c55759155260-kube-api-access-ks6rh\") pod \"a3ddbc68-5d23-420f-a844-c55759155260\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.417289 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3ddbc68-5d23-420f-a844-c55759155260-logs\") pod \"a3ddbc68-5d23-420f-a844-c55759155260\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.417344 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-dns-swift-storage-0\") pod \"c932199b-1077-4aa1-aa88-7867c5c84212\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.417382 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-ovsdbserver-sb\") pod \"c932199b-1077-4aa1-aa88-7867c5c84212\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.417555 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-scripts\") pod \"a3ddbc68-5d23-420f-a844-c55759155260\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.417606 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/a3ddbc68-5d23-420f-a844-c55759155260-etc-machine-id\") pod \"a3ddbc68-5d23-420f-a844-c55759155260\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.417681 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-config-data\") pod \"a3ddbc68-5d23-420f-a844-c55759155260\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.417768 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-config-data-custom\") pod \"a3ddbc68-5d23-420f-a844-c55759155260\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.417807 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-ovsdbserver-nb\") pod \"c932199b-1077-4aa1-aa88-7867c5c84212\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.417834 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-combined-ca-bundle\") pod \"a3ddbc68-5d23-420f-a844-c55759155260\" (UID: \"a3ddbc68-5d23-420f-a844-c55759155260\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.417929 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxgj7\" (UniqueName: \"kubernetes.io/projected/c932199b-1077-4aa1-aa88-7867c5c84212-kube-api-access-mxgj7\") pod \"c932199b-1077-4aa1-aa88-7867c5c84212\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " Feb 16 17:23:24 crc 
kubenswrapper[4794]: I0216 17:23:24.417976 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-dns-svc\") pod \"c932199b-1077-4aa1-aa88-7867c5c84212\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.418012 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-config\") pod \"c932199b-1077-4aa1-aa88-7867c5c84212\" (UID: \"c932199b-1077-4aa1-aa88-7867c5c84212\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.419251 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgmwp\" (UniqueName: \"kubernetes.io/projected/678592d4-5921-4bc3-bdc9-d47b36ffba37-kube-api-access-zgmwp\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.419271 4794 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/678592d4-5921-4bc3-bdc9-d47b36ffba37-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.419487 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3ddbc68-5d23-420f-a844-c55759155260-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a3ddbc68-5d23-420f-a844-c55759155260" (UID: "a3ddbc68-5d23-420f-a844-c55759155260"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.421097 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3ddbc68-5d23-420f-a844-c55759155260-logs" (OuterVolumeSpecName: "logs") pod "a3ddbc68-5d23-420f-a844-c55759155260" (UID: "a3ddbc68-5d23-420f-a844-c55759155260"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.435997 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-scripts" (OuterVolumeSpecName: "scripts") pod "a3ddbc68-5d23-420f-a844-c55759155260" (UID: "a3ddbc68-5d23-420f-a844-c55759155260"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.482606 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3ddbc68-5d23-420f-a844-c55759155260-kube-api-access-ks6rh" (OuterVolumeSpecName: "kube-api-access-ks6rh") pod "a3ddbc68-5d23-420f-a844-c55759155260" (UID: "a3ddbc68-5d23-420f-a844-c55759155260"). InnerVolumeSpecName "kube-api-access-ks6rh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.504585 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a3ddbc68-5d23-420f-a844-c55759155260" (UID: "a3ddbc68-5d23-420f-a844-c55759155260"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.521080 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks6rh\" (UniqueName: \"kubernetes.io/projected/a3ddbc68-5d23-420f-a844-c55759155260-kube-api-access-ks6rh\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.521122 4794 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3ddbc68-5d23-420f-a844-c55759155260-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.521136 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.521147 4794 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3ddbc68-5d23-420f-a844-c55759155260-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.521157 4794 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.523081 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c932199b-1077-4aa1-aa88-7867c5c84212-kube-api-access-mxgj7" (OuterVolumeSpecName: "kube-api-access-mxgj7") pod "c932199b-1077-4aa1-aa88-7867c5c84212" (UID: "c932199b-1077-4aa1-aa88-7867c5c84212"). InnerVolumeSpecName "kube-api-access-mxgj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.623436 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxgj7\" (UniqueName: \"kubernetes.io/projected/c932199b-1077-4aa1-aa88-7867c5c84212-kube-api-access-mxgj7\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:24 crc kubenswrapper[4794]: W0216 17:23:24.681741 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57584011_2a08_4edd_a53a_fa54541cfc82.slice/crio-d2dcf4a832965cf0964f184eaacf08aa5db951533a972ef0105ba36efd2d8750 WatchSource:0}: Error finding container d2dcf4a832965cf0964f184eaacf08aa5db951533a972ef0105ba36efd2d8750: Status 404 returned error can't find the container with id d2dcf4a832965cf0964f184eaacf08aa5db951533a972ef0105ba36efd2d8750 Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.684014 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-547586545-c5624"] Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.710974 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.720020 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-99f86f5f6-sdjdr" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.723210 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-868454c84d-mwnsk"] Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.725117 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjtg2\" (UniqueName: \"kubernetes.io/projected/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-kube-api-access-zjtg2\") pod \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.726219 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-scripts\") pod \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.726284 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-log-httpd\") pod \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.726408 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-config-data\") pod \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.726439 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-combined-ca-bundle\") pod \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " Feb 16 17:23:24 crc 
kubenswrapper[4794]: I0216 17:23:24.726517 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-run-httpd\") pod \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.726616 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-sg-core-conf-yaml\") pod \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\" (UID: \"5ddd9f53-b6ac-4624-92b4-a076ad62d8de\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.727598 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5ddd9f53-b6ac-4624-92b4-a076ad62d8de" (UID: "5ddd9f53-b6ac-4624-92b4-a076ad62d8de"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.728003 4794 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.738649 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5ddd9f53-b6ac-4624-92b4-a076ad62d8de" (UID: "5ddd9f53-b6ac-4624-92b4-a076ad62d8de"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.739348 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/678592d4-5921-4bc3-bdc9-d47b36ffba37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "678592d4-5921-4bc3-bdc9-d47b36ffba37" (UID: "678592d4-5921-4bc3-bdc9-d47b36ffba37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.743001 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-kube-api-access-zjtg2" (OuterVolumeSpecName: "kube-api-access-zjtg2") pod "5ddd9f53-b6ac-4624-92b4-a076ad62d8de" (UID: "5ddd9f53-b6ac-4624-92b4-a076ad62d8de"). InnerVolumeSpecName "kube-api-access-zjtg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.763606 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-scripts" (OuterVolumeSpecName: "scripts") pod "5ddd9f53-b6ac-4624-92b4-a076ad62d8de" (UID: "5ddd9f53-b6ac-4624-92b4-a076ad62d8de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.772465 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3ddbc68-5d23-420f-a844-c55759155260" (UID: "a3ddbc68-5d23-420f-a844-c55759155260"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.831283 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-config\") pod \"5b69fea3-061c-40bb-86ff-ca8af8587049\" (UID: \"5b69fea3-061c-40bb-86ff-ca8af8587049\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.831425 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-httpd-config\") pod \"5b69fea3-061c-40bb-86ff-ca8af8587049\" (UID: \"5b69fea3-061c-40bb-86ff-ca8af8587049\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.832057 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn5d7\" (UniqueName: \"kubernetes.io/projected/5b69fea3-061c-40bb-86ff-ca8af8587049-kube-api-access-qn5d7\") pod \"5b69fea3-061c-40bb-86ff-ca8af8587049\" (UID: \"5b69fea3-061c-40bb-86ff-ca8af8587049\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.832105 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-combined-ca-bundle\") pod \"5b69fea3-061c-40bb-86ff-ca8af8587049\" (UID: \"5b69fea3-061c-40bb-86ff-ca8af8587049\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.832251 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-ovndb-tls-certs\") pod \"5b69fea3-061c-40bb-86ff-ca8af8587049\" (UID: \"5b69fea3-061c-40bb-86ff-ca8af8587049\") " Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.833047 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjtg2\" (UniqueName: 
\"kubernetes.io/projected/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-kube-api-access-zjtg2\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.833067 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678592d4-5921-4bc3-bdc9-d47b36ffba37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.833076 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.833087 4794 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.833096 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.888038 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b69fea3-061c-40bb-86ff-ca8af8587049-kube-api-access-qn5d7" (OuterVolumeSpecName: "kube-api-access-qn5d7") pod "5b69fea3-061c-40bb-86ff-ca8af8587049" (UID: "5b69fea3-061c-40bb-86ff-ca8af8587049"). InnerVolumeSpecName "kube-api-access-qn5d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.889218 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5b69fea3-061c-40bb-86ff-ca8af8587049" (UID: "5b69fea3-061c-40bb-86ff-ca8af8587049"). 
InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.898637 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/678592d4-5921-4bc3-bdc9-d47b36ffba37-config-data" (OuterVolumeSpecName: "config-data") pod "678592d4-5921-4bc3-bdc9-d47b36ffba37" (UID: "678592d4-5921-4bc3-bdc9-d47b36ffba37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.934992 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.935056 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.935337 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.944933 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-67dd4d6c5f-84d7v" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.949274 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5ddd9f53-b6ac-4624-92b4-a076ad62d8de" (UID: "5ddd9f53-b6ac-4624-92b4-a076ad62d8de"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.951943 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-99f86f5f6-sdjdr" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.958085 4794 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.959576 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn5d7\" (UniqueName: \"kubernetes.io/projected/5b69fea3-061c-40bb-86ff-ca8af8587049-kube-api-access-qn5d7\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.959631 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678592d4-5921-4bc3-bdc9-d47b36ffba37-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:24 crc kubenswrapper[4794]: I0216 17:23:24.977685 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6df69d99d9-jgbc2" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.007664 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c932199b-1077-4aa1-aa88-7867c5c84212" (UID: "c932199b-1077-4aa1-aa88-7867c5c84212"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.016624 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c932199b-1077-4aa1-aa88-7867c5c84212" (UID: "c932199b-1077-4aa1-aa88-7867c5c84212"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.041086 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.643810319 podStartE2EDuration="24.041057504s" podCreationTimestamp="2026-02-16 17:23:01 +0000 UTC" firstStartedPulling="2026-02-16 17:23:02.901249376 +0000 UTC m=+1408.849344023" lastFinishedPulling="2026-02-16 17:23:23.298496571 +0000 UTC m=+1429.246591208" observedRunningTime="2026-02-16 17:23:25.03597692 +0000 UTC m=+1430.984071567" watchObservedRunningTime="2026-02-16 17:23:25.041057504 +0000 UTC m=+1430.989152151" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.053110 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-config" (OuterVolumeSpecName: "config") pod "c932199b-1077-4aa1-aa88-7867c5c84212" (UID: "c932199b-1077-4aa1-aa88-7867c5c84212"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.063249 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.063276 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.063285 4794 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.063295 4794 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.065678 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c932199b-1077-4aa1-aa88-7867c5c84212" (UID: "c932199b-1077-4aa1-aa88-7867c5c84212"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.075968 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c932199b-1077-4aa1-aa88-7867c5c84212" (UID: "c932199b-1077-4aa1-aa88-7867c5c84212"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.170096 4794 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.170126 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c932199b-1077-4aa1-aa88-7867c5c84212-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.263823 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b69fea3-061c-40bb-86ff-ca8af8587049" (UID: "5b69fea3-061c-40bb-86ff-ca8af8587049"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.273347 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.293887 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-config" (OuterVolumeSpecName: "config") pod "5b69fea3-061c-40bb-86ff-ca8af8587049" (UID: "5b69fea3-061c-40bb-86ff-ca8af8587049"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.318745 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-config-data" (OuterVolumeSpecName: "config-data") pod "a3ddbc68-5d23-420f-a844-c55759155260" (UID: "a3ddbc68-5d23-420f-a844-c55759155260"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.336539 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ddd9f53-b6ac-4624-92b4-a076ad62d8de" (UID: "5ddd9f53-b6ac-4624-92b4-a076ad62d8de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.364819 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5b69fea3-061c-40bb-86ff-ca8af8587049" (UID: "5b69fea3-061c-40bb-86ff-ca8af8587049"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.378206 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.378256 4794 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.378270 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b69fea3-061c-40bb-86ff-ca8af8587049-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.378282 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ddbc68-5d23-420f-a844-c55759155260-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.423593 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-config-data" (OuterVolumeSpecName: "config-data") pod "5ddd9f53-b6ac-4624-92b4-a076ad62d8de" (UID: "5ddd9f53-b6ac-4624-92b4-a076ad62d8de"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.448125 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddd9f53-b6ac-4624-92b4-a076ad62d8de","Type":"ContainerDied","Data":"22477a207ed67d88b475741a1772c0f8c9cb8d095ec08b5046c0e0b0e83aafdf"} Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.448228 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-599bd89595-29q2j" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.448245 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-54fcd965cd-jvdzj"] Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.448266 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-54fbcd866f-568dg"] Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.448279 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-r78sp" event={"ID":"c932199b-1077-4aa1-aa88-7867c5c84212","Type":"ContainerDied","Data":"3685edbc37b084f6df55eacb53496f7a085a248e9d6ba1aacb2634c13d15d9a3"} Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.448312 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-547586545-c5624" event={"ID":"c0403b0e-4120-4eb9-b7ed-dcfafb224d46","Type":"ContainerStarted","Data":"d1be619063c989aa95220fba5e92a19630a87e0a51dd49e9d9b7058286405c8e"} Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.448329 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-849cbf9447-6chxp"] Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.448344 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"11c6449f-ae59-4210-9c59-bafcbb116ed8","Type":"ContainerStarted","Data":"d7a3135b6860b62aef5981ffe3f80edb797928450c1af0ab75feaadc1433a6e3"} Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 
17:23:25.448355 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-54fcd965cd-jvdzj" event={"ID":"d1755f51-5efd-43e0-902e-c2c1b6760350","Type":"ContainerStarted","Data":"36722f1967f1d98dec65a551cad81925c20fc4158ae361852e886c67636660cc"} Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.448363 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6cb67474dc-d4tmw" event={"ID":"cd56173e-c7f0-4309-97a9-4bd89c7704f3","Type":"ContainerStarted","Data":"2a5501566dec3a71299b011edce9cebca38ee4141976f43777c2a59fa906d9b2"} Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.448374 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a3ddbc68-5d23-420f-a844-c55759155260","Type":"ContainerDied","Data":"1102e60ca1697705c26d4ad4a7d48484ae4c77431be2d97e106bc01e62d1cec1"} Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.448394 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-99f86f5f6-sdjdr" event={"ID":"5b69fea3-061c-40bb-86ff-ca8af8587049","Type":"ContainerDied","Data":"b5bd6823370a9894ca639d4b897e2b1cb0f3900a56cff1cd8a184ab7f6f72b08"} Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.448406 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6df69d99d9-jgbc2" event={"ID":"678592d4-5921-4bc3-bdc9-d47b36ffba37","Type":"ContainerDied","Data":"0876de444455ca21b42df3630f295022f846104fc11761e9b45cb8473280e91e"} Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.448417 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-868454c84d-mwnsk" event={"ID":"57584011-2a08-4edd-a53a-fa54541cfc82","Type":"ContainerStarted","Data":"d2dcf4a832965cf0964f184eaacf08aa5db951533a972ef0105ba36efd2d8750"} Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.448445 4794 scope.go:117] "RemoveContainer" containerID="d8ca38db2ddc3163a6c287eb50021e606fa8f95771f31edd7027c566317049fa" Feb 16 17:23:25 
crc kubenswrapper[4794]: I0216 17:23:25.480150 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ddd9f53-b6ac-4624-92b4-a076ad62d8de-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.512034 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6df69d99d9-jgbc2"] Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.523103 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6df69d99d9-jgbc2"] Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.526501 4794 scope.go:117] "RemoveContainer" containerID="5fc05aaf20aad6db9c0337f839a3c35aef358c9173809b50b0e5fbfe3bfeed28" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.535291 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-67dd4d6c5f-84d7v"] Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.544766 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-67dd4d6c5f-84d7v"] Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.573159 4794 scope.go:117] "RemoveContainer" containerID="b7b183697fa62a18c0922b1d7b0ce4e7591a68eadb850d74d1de2eba8b6349a2" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.639349 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.664757 4794 scope.go:117] "RemoveContainer" containerID="68cb34ae47ee273c416e83f42d642c333007a14d80b0330f29a6a98f12df8e42" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.679871 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.690686 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 17:23:25 crc kubenswrapper[4794]: E0216 17:23:25.691342 4794 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5b69fea3-061c-40bb-86ff-ca8af8587049" containerName="neutron-api" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.692932 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b69fea3-061c-40bb-86ff-ca8af8587049" containerName="neutron-api" Feb 16 17:23:25 crc kubenswrapper[4794]: E0216 17:23:25.704539 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddd9f53-b6ac-4624-92b4-a076ad62d8de" containerName="ceilometer-notification-agent" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.704595 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddd9f53-b6ac-4624-92b4-a076ad62d8de" containerName="ceilometer-notification-agent" Feb 16 17:23:25 crc kubenswrapper[4794]: E0216 17:23:25.704647 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c932199b-1077-4aa1-aa88-7867c5c84212" containerName="dnsmasq-dns" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.704657 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="c932199b-1077-4aa1-aa88-7867c5c84212" containerName="dnsmasq-dns" Feb 16 17:23:25 crc kubenswrapper[4794]: E0216 17:23:25.704692 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b69fea3-061c-40bb-86ff-ca8af8587049" containerName="neutron-httpd" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.704701 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b69fea3-061c-40bb-86ff-ca8af8587049" containerName="neutron-httpd" Feb 16 17:23:25 crc kubenswrapper[4794]: E0216 17:23:25.704718 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d" containerName="heat-cfnapi" Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.704737 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d" containerName="heat-cfnapi" Feb 16 17:23:25 crc kubenswrapper[4794]: E0216 17:23:25.704756 4794 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c932199b-1077-4aa1-aa88-7867c5c84212" containerName="init"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.704763 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="c932199b-1077-4aa1-aa88-7867c5c84212" containerName="init"
Feb 16 17:23:25 crc kubenswrapper[4794]: E0216 17:23:25.704808 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddd9f53-b6ac-4624-92b4-a076ad62d8de" containerName="sg-core"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.704824 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddd9f53-b6ac-4624-92b4-a076ad62d8de" containerName="sg-core"
Feb 16 17:23:25 crc kubenswrapper[4794]: E0216 17:23:25.704867 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddd9f53-b6ac-4624-92b4-a076ad62d8de" containerName="ceilometer-central-agent"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.704894 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddd9f53-b6ac-4624-92b4-a076ad62d8de" containerName="ceilometer-central-agent"
Feb 16 17:23:25 crc kubenswrapper[4794]: E0216 17:23:25.704911 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3ddbc68-5d23-420f-a844-c55759155260" containerName="cinder-api-log"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.704917 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ddbc68-5d23-420f-a844-c55759155260" containerName="cinder-api-log"
Feb 16 17:23:25 crc kubenswrapper[4794]: E0216 17:23:25.704927 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678592d4-5921-4bc3-bdc9-d47b36ffba37" containerName="heat-api"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.704934 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="678592d4-5921-4bc3-bdc9-d47b36ffba37" containerName="heat-api"
Feb 16 17:23:25 crc kubenswrapper[4794]: E0216 17:23:25.704955 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddd9f53-b6ac-4624-92b4-a076ad62d8de" containerName="proxy-httpd"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.704960 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddd9f53-b6ac-4624-92b4-a076ad62d8de" containerName="proxy-httpd"
Feb 16 17:23:25 crc kubenswrapper[4794]: E0216 17:23:25.704984 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3ddbc68-5d23-420f-a844-c55759155260" containerName="cinder-api"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.704990 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ddbc68-5d23-420f-a844-c55759155260" containerName="cinder-api"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.705629 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3ddbc68-5d23-420f-a844-c55759155260" containerName="cinder-api-log"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.705646 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ddd9f53-b6ac-4624-92b4-a076ad62d8de" containerName="ceilometer-central-agent"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.705666 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d" containerName="heat-cfnapi"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.705675 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ddd9f53-b6ac-4624-92b4-a076ad62d8de" containerName="sg-core"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.705691 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b69fea3-061c-40bb-86ff-ca8af8587049" containerName="neutron-api"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.705700 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b69fea3-061c-40bb-86ff-ca8af8587049" containerName="neutron-httpd"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.705708 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ddd9f53-b6ac-4624-92b4-a076ad62d8de" containerName="ceilometer-notification-agent"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.705716 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="678592d4-5921-4bc3-bdc9-d47b36ffba37" containerName="heat-api"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.705730 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ddd9f53-b6ac-4624-92b4-a076ad62d8de" containerName="proxy-httpd"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.705746 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3ddbc68-5d23-420f-a844-c55759155260" containerName="cinder-api"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.705754 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="c932199b-1077-4aa1-aa88-7867c5c84212" containerName="dnsmasq-dns"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.707463 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.712781 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.716115 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.716426 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.718841 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.730935 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.759182 4794 scope.go:117] "RemoveContainer" containerID="fe46504ecb88e7e48e3acf8fb9b0e1f42b729b703c11863dd2c44da68e2cfa2e"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.796419 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58f60884-ce4b-47ac-8720-dd812acdc8a8-config-data-custom\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.796512 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8qb4\" (UniqueName: \"kubernetes.io/projected/58f60884-ce4b-47ac-8720-dd812acdc8a8-kube-api-access-p8qb4\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.796543 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58f60884-ce4b-47ac-8720-dd812acdc8a8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.796570 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f60884-ce4b-47ac-8720-dd812acdc8a8-config-data\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.796621 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58f60884-ce4b-47ac-8720-dd812acdc8a8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.796646 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58f60884-ce4b-47ac-8720-dd812acdc8a8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.796677 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f60884-ce4b-47ac-8720-dd812acdc8a8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.796747 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58f60884-ce4b-47ac-8720-dd812acdc8a8-scripts\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.796774 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58f60884-ce4b-47ac-8720-dd812acdc8a8-logs\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.839938 4794 scope.go:117] "RemoveContainer" containerID="c61f43a7593538088ee05f9232a915c955142186f9d16f1a1fbd078e2e9acd40"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.844407 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.860559 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-99f86f5f6-sdjdr"]
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.874773 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.879689 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.886386 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-99f86f5f6-sdjdr"]
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.894436 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.900500 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8qb4\" (UniqueName: \"kubernetes.io/projected/58f60884-ce4b-47ac-8720-dd812acdc8a8-kube-api-access-p8qb4\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.900563 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58f60884-ce4b-47ac-8720-dd812acdc8a8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.900620 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f60884-ce4b-47ac-8720-dd812acdc8a8-config-data\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.900675 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acdf75eb-c141-4f82-94f9-dd95db013ba7-log-httpd\") pod \"ceilometer-0\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") " pod="openstack/ceilometer-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.900740 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58f60884-ce4b-47ac-8720-dd812acdc8a8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.900767 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58f60884-ce4b-47ac-8720-dd812acdc8a8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.900819 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f60884-ce4b-47ac-8720-dd812acdc8a8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.900879 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acdf75eb-c141-4f82-94f9-dd95db013ba7-run-httpd\") pod \"ceilometer-0\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") " pod="openstack/ceilometer-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.900974 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") " pod="openstack/ceilometer-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.901023 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") " pod="openstack/ceilometer-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.901042 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-scripts\") pod \"ceilometer-0\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") " pod="openstack/ceilometer-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.901078 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58f60884-ce4b-47ac-8720-dd812acdc8a8-scripts\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.901099 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btkhp\" (UniqueName: \"kubernetes.io/projected/acdf75eb-c141-4f82-94f9-dd95db013ba7-kube-api-access-btkhp\") pod \"ceilometer-0\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") " pod="openstack/ceilometer-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.901120 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58f60884-ce4b-47ac-8720-dd812acdc8a8-logs\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.901198 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58f60884-ce4b-47ac-8720-dd812acdc8a8-config-data-custom\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.901225 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-config-data\") pod \"ceilometer-0\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") " pod="openstack/ceilometer-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.901995 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.902149 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.914864 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58f60884-ce4b-47ac-8720-dd812acdc8a8-logs\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.915429 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58f60884-ce4b-47ac-8720-dd812acdc8a8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.945996 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f60884-ce4b-47ac-8720-dd812acdc8a8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.946203 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58f60884-ce4b-47ac-8720-dd812acdc8a8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.946253 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f60884-ce4b-47ac-8720-dd812acdc8a8-config-data\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.946751 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58f60884-ce4b-47ac-8720-dd812acdc8a8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:25 crc kubenswrapper[4794]: I0216 17:23:25.947244 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58f60884-ce4b-47ac-8720-dd812acdc8a8-config-data-custom\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.005729 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") " pod="openstack/ceilometer-0"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.006089 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-scripts\") pod \"ceilometer-0\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") " pod="openstack/ceilometer-0"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.006130 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btkhp\" (UniqueName: \"kubernetes.io/projected/acdf75eb-c141-4f82-94f9-dd95db013ba7-kube-api-access-btkhp\") pod \"ceilometer-0\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") " pod="openstack/ceilometer-0"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.006201 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-config-data\") pod \"ceilometer-0\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") " pod="openstack/ceilometer-0"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.006325 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acdf75eb-c141-4f82-94f9-dd95db013ba7-log-httpd\") pod \"ceilometer-0\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") " pod="openstack/ceilometer-0"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.006402 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acdf75eb-c141-4f82-94f9-dd95db013ba7-run-httpd\") pod \"ceilometer-0\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") " pod="openstack/ceilometer-0"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.006446 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") " pod="openstack/ceilometer-0"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.011036 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acdf75eb-c141-4f82-94f9-dd95db013ba7-run-httpd\") pod \"ceilometer-0\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") " pod="openstack/ceilometer-0"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.014849 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acdf75eb-c141-4f82-94f9-dd95db013ba7-log-httpd\") pod \"ceilometer-0\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") " pod="openstack/ceilometer-0"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.024662 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-config-data\") pod \"ceilometer-0\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") " pod="openstack/ceilometer-0"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.025329 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") " pod="openstack/ceilometer-0"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.025877 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-scripts\") pod \"ceilometer-0\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") " pod="openstack/ceilometer-0"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.039159 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") " pod="openstack/ceilometer-0"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.086219 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54fbcd866f-568dg" event={"ID":"07460b16-5cea-4a16-8389-dc1d3e7c3ee8","Type":"ContainerStarted","Data":"07a7703ae85b026816df39c57725ec2d31417af521dfbfa360197ae38ee4f374"}
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.093234 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-547586545-c5624" event={"ID":"c0403b0e-4120-4eb9-b7ed-dcfafb224d46","Type":"ContainerStarted","Data":"8cd3bacc494fcb7941f6dfd41bf70aa016093ecd849fe723be1104acac716a1c"}
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.094762 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-547586545-c5624"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.096854 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-849cbf9447-6chxp" event={"ID":"ccd75b14-da33-40cb-ace9-fae71c629d01","Type":"ContainerStarted","Data":"3085c62cbedf98180c526e919ecd1c00251d116b1ebe9c9aa1f5136d54a4721b"}
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.096891 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-849cbf9447-6chxp" event={"ID":"ccd75b14-da33-40cb-ace9-fae71c629d01","Type":"ContainerStarted","Data":"f5d265bacc0588740a007b7d4e9059c06048fb0c0d88f3f3043a249739b0352e"}
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.097163 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-849cbf9447-6chxp"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.119926 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-547586545-c5624" podStartSLOduration=13.11990117 podStartE2EDuration="13.11990117s" podCreationTimestamp="2026-02-16 17:23:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:26.119865529 +0000 UTC m=+1432.067960166" watchObservedRunningTime="2026-02-16 17:23:26.11990117 +0000 UTC m=+1432.067995817"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.155450 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-849cbf9447-6chxp" podStartSLOduration=9.155427963 podStartE2EDuration="9.155427963s" podCreationTimestamp="2026-02-16 17:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:26.149788224 +0000 UTC m=+1432.097882881" watchObservedRunningTime="2026-02-16 17:23:26.155427963 +0000 UTC m=+1432.103522610"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.166890 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58f60884-ce4b-47ac-8720-dd812acdc8a8-scripts\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.167093 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8qb4\" (UniqueName: \"kubernetes.io/projected/58f60884-ce4b-47ac-8720-dd812acdc8a8-kube-api-access-p8qb4\") pod \"cinder-api-0\" (UID: \"58f60884-ce4b-47ac-8720-dd812acdc8a8\") " pod="openstack/cinder-api-0"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.172318 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btkhp\" (UniqueName: \"kubernetes.io/projected/acdf75eb-c141-4f82-94f9-dd95db013ba7-kube-api-access-btkhp\") pod \"ceilometer-0\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") " pod="openstack/ceilometer-0"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.279737 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.382489 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.683995 4794 scope.go:117] "RemoveContainer" containerID="201f8bd8e605ac419284e64c74d6ff0721a0638fba59d4096a40a4878d91a42b"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.723966 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.844996 4794 scope.go:117] "RemoveContainer" containerID="df9a6e065fb5627d63f0f8ec59dba882792d59a7d229e3380544dc340f4051b5"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.868852 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b69fea3-061c-40bb-86ff-ca8af8587049" path="/var/lib/kubelet/pods/5b69fea3-061c-40bb-86ff-ca8af8587049/volumes"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.869642 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ddd9f53-b6ac-4624-92b4-a076ad62d8de" path="/var/lib/kubelet/pods/5ddd9f53-b6ac-4624-92b4-a076ad62d8de/volumes"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.870465 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="678592d4-5921-4bc3-bdc9-d47b36ffba37" path="/var/lib/kubelet/pods/678592d4-5921-4bc3-bdc9-d47b36ffba37/volumes"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.873131 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3ddbc68-5d23-420f-a844-c55759155260" path="/var/lib/kubelet/pods/a3ddbc68-5d23-420f-a844-c55759155260/volumes"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.874192 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d" path="/var/lib/kubelet/pods/f3d4bb4f-25b9-450d-bb30-d2a0a9838e8d/volumes"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.878955 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-57f99b44dd-9kw4m"
Feb 16 17:23:26 crc kubenswrapper[4794]: I0216 17:23:26.977497 4794 scope.go:117] "RemoveContainer" containerID="e72c1c52f98c2b1baaab6d99b99add46e1dd0d4a019fedd86f26bdd1e4265a79"
Feb 16 17:23:27 crc kubenswrapper[4794]: I0216 17:23:27.056592 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-57b87468-bqjtk"]
Feb 16 17:23:27 crc kubenswrapper[4794]: I0216 17:23:27.056857 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-57b87468-bqjtk" podUID="b92c8cdf-5125-46d9-89c1-8549a2dc1b74" containerName="placement-log" containerID="cri-o://025c1074d07a609013845c69edfd72908e6163c9e3e9bc96693d96d9fbd6981f" gracePeriod=30
Feb 16 17:23:27 crc kubenswrapper[4794]: I0216 17:23:27.057215 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-57b87468-bqjtk" podUID="b92c8cdf-5125-46d9-89c1-8549a2dc1b74" containerName="placement-api" containerID="cri-o://ad5759da07de4f2d2fa94d28bda14c0227f2d26817e7a0edb7a7e29f3edd7c8f" gracePeriod=30
Feb 16 17:23:27 crc kubenswrapper[4794]: I0216 17:23:27.099921 4794 scope.go:117] "RemoveContainer" containerID="12db5c04ecc3f1a679a59c218185982a095aeb876b9954d19b4c4aecd06fef40"
Feb 16 17:23:27 crc kubenswrapper[4794]: I0216 17:23:27.113952 4794 generic.go:334] "Generic (PLEG): container finished" podID="d1755f51-5efd-43e0-902e-c2c1b6760350" containerID="d3c5801b69b07f49d4c9b3281f51da1ec9e6903add554dea7243e0ea43492da5" exitCode=1
Feb 16 17:23:27 crc kubenswrapper[4794]: I0216 17:23:27.114204 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-54fcd965cd-jvdzj" event={"ID":"d1755f51-5efd-43e0-902e-c2c1b6760350","Type":"ContainerDied","Data":"d3c5801b69b07f49d4c9b3281f51da1ec9e6903add554dea7243e0ea43492da5"}
Feb 16 17:23:27 crc kubenswrapper[4794]: I0216 17:23:27.115372 4794 scope.go:117] "RemoveContainer" containerID="d3c5801b69b07f49d4c9b3281f51da1ec9e6903add554dea7243e0ea43492da5"
Feb 16 17:23:27 crc kubenswrapper[4794]: I0216 17:23:27.123915 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6cb67474dc-d4tmw" event={"ID":"cd56173e-c7f0-4309-97a9-4bd89c7704f3","Type":"ContainerStarted","Data":"65103177f7082e238704c697ffe37bc424ac74ce824e4742d4444478ad625ac8"}
Feb 16 17:23:27 crc kubenswrapper[4794]: I0216 17:23:27.123978 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6cb67474dc-d4tmw"
Feb 16 17:23:27 crc kubenswrapper[4794]: I0216 17:23:27.124665 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6cb67474dc-d4tmw"
Feb 16 17:23:27 crc kubenswrapper[4794]: I0216 17:23:27.130920 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-868454c84d-mwnsk" event={"ID":"57584011-2a08-4edd-a53a-fa54541cfc82","Type":"ContainerStarted","Data":"d24048a25970d649900e8d8e5953eb7fe8ec0d5a0dc0e1ca8c1d003f49018d7b"}
Feb 16 17:23:27 crc kubenswrapper[4794]: I0216 17:23:27.131115 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-868454c84d-mwnsk"
Feb 16 17:23:27 crc kubenswrapper[4794]: I0216 17:23:27.132340 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54fbcd866f-568dg" event={"ID":"07460b16-5cea-4a16-8389-dc1d3e7c3ee8","Type":"ContainerStarted","Data":"2e5901393a901c4ec21896c2c623a2da69bb48a1c83744cd46e0028fa13d15c7"}
Feb 16 17:23:27 crc kubenswrapper[4794]: I0216 17:23:27.133493 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-54fbcd866f-568dg"
Feb 16 17:23:27 crc kubenswrapper[4794]: I0216 17:23:27.143423 4794 scope.go:117] "RemoveContainer" containerID="bdcbee2f070a73c1f074e7323388703fc457c7eb6c68b2bb52e4fc93498634db"
Feb 16 17:23:27 crc kubenswrapper[4794]: I0216 17:23:27.185977 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6cb67474dc-d4tmw" podStartSLOduration=15.185957886 podStartE2EDuration="15.185957886s" podCreationTimestamp="2026-02-16 17:23:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:27.175633385 +0000 UTC m=+1433.123728032" watchObservedRunningTime="2026-02-16 17:23:27.185957886 +0000 UTC m=+1433.134052543"
Feb 16 17:23:27 crc kubenswrapper[4794]: I0216 17:23:27.220050 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-54fbcd866f-568dg" podStartSLOduration=14.220027088 podStartE2EDuration="14.220027088s" podCreationTimestamp="2026-02-16 17:23:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:27.195796854 +0000 UTC m=+1433.143891501" watchObservedRunningTime="2026-02-16 17:23:27.220027088 +0000 UTC m=+1433.168121735"
Feb 16 17:23:27 crc kubenswrapper[4794]: I0216 17:23:27.238137 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-868454c84d-mwnsk" podStartSLOduration=10.238113828 podStartE2EDuration="10.238113828s" podCreationTimestamp="2026-02-16 17:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:27.227970372 +0000 UTC m=+1433.176065029" watchObservedRunningTime="2026-02-16 17:23:27.238113828 +0000 UTC m=+1433.186208475"
Feb 16 17:23:27 crc kubenswrapper[4794]: I0216 17:23:27.344178 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 16 17:23:27 crc kubenswrapper[4794]: I0216 17:23:27.497500 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:23:28 crc kubenswrapper[4794]: I0216 17:23:28.190866 4794 generic.go:334] "Generic (PLEG): container finished" podID="d1755f51-5efd-43e0-902e-c2c1b6760350" containerID="e228aa19f3e4f3f7b5db5a369d576ef8d32dbe0c6adf629577e1444bde573485" exitCode=1
Feb 16 17:23:28 crc kubenswrapper[4794]: I0216 17:23:28.191204 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-54fcd965cd-jvdzj" event={"ID":"d1755f51-5efd-43e0-902e-c2c1b6760350","Type":"ContainerDied","Data":"e228aa19f3e4f3f7b5db5a369d576ef8d32dbe0c6adf629577e1444bde573485"}
Feb 16 17:23:28 crc kubenswrapper[4794]: I0216 17:23:28.191238 4794 scope.go:117] "RemoveContainer" containerID="d3c5801b69b07f49d4c9b3281f51da1ec9e6903add554dea7243e0ea43492da5"
Feb 16 17:23:28 crc kubenswrapper[4794]: I0216 17:23:28.191968 4794 scope.go:117] "RemoveContainer" containerID="e228aa19f3e4f3f7b5db5a369d576ef8d32dbe0c6adf629577e1444bde573485"
Feb 16 17:23:28 crc kubenswrapper[4794]: E0216 17:23:28.192212 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-54fcd965cd-jvdzj_openstack(d1755f51-5efd-43e0-902e-c2c1b6760350)\"" pod="openstack/heat-api-54fcd965cd-jvdzj" podUID="d1755f51-5efd-43e0-902e-c2c1b6760350"
Feb 16 17:23:28 crc kubenswrapper[4794]: I0216 17:23:28.208673 4794 generic.go:334] "Generic (PLEG): container finished" podID="b92c8cdf-5125-46d9-89c1-8549a2dc1b74" containerID="025c1074d07a609013845c69edfd72908e6163c9e3e9bc96693d96d9fbd6981f" exitCode=143
Feb 16 17:23:28 crc kubenswrapper[4794]: I0216 17:23:28.208758 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-57b87468-bqjtk" event={"ID":"b92c8cdf-5125-46d9-89c1-8549a2dc1b74","Type":"ContainerDied","Data":"025c1074d07a609013845c69edfd72908e6163c9e3e9bc96693d96d9fbd6981f"}
Feb 16 17:23:28 crc kubenswrapper[4794]: I0216 17:23:28.212889 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"58f60884-ce4b-47ac-8720-dd812acdc8a8","Type":"ContainerStarted","Data":"44f2aec3b19ed09b8d4674ee11c5247be8c7c33bee8d78dd6cff5037de653923"}
Feb 16 17:23:28 crc kubenswrapper[4794]: I0216 17:23:28.243493 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acdf75eb-c141-4f82-94f9-dd95db013ba7","Type":"ContainerStarted","Data":"155757857d113d89aa8f6e10ba8da8b97f02c0510fe313a869957fc1757aeec3"}
Feb 16 17:23:28 crc kubenswrapper[4794]: I0216 17:23:28.256769 4794 generic.go:334] "Generic (PLEG): container finished" podID="07460b16-5cea-4a16-8389-dc1d3e7c3ee8" containerID="2e5901393a901c4ec21896c2c623a2da69bb48a1c83744cd46e0028fa13d15c7" exitCode=1
Feb 16 17:23:28 crc kubenswrapper[4794]: I0216 17:23:28.257447 4794 scope.go:117] "RemoveContainer" containerID="2e5901393a901c4ec21896c2c623a2da69bb48a1c83744cd46e0028fa13d15c7"
Feb 16 17:23:28 crc kubenswrapper[4794]: I0216 17:23:28.256908 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54fbcd866f-568dg" event={"ID":"07460b16-5cea-4a16-8389-dc1d3e7c3ee8","Type":"ContainerDied","Data":"2e5901393a901c4ec21896c2c623a2da69bb48a1c83744cd46e0028fa13d15c7"}
Feb 16 17:23:29 crc kubenswrapper[4794]: I0216 17:23:29.030158 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-54fcd965cd-jvdzj"
Feb 16 17:23:29 crc kubenswrapper[4794]: I0216 17:23:29.030466 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-54fcd965cd-jvdzj"
Feb 16 17:23:29 crc kubenswrapper[4794]: I0216 17:23:29.069402 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-54fbcd866f-568dg"
Feb 16 17:23:29 crc kubenswrapper[4794]: I0216 17:23:29.267585 4794 generic.go:334] "Generic (PLEG): container finished" podID="07460b16-5cea-4a16-8389-dc1d3e7c3ee8" containerID="5d689b127434cc55d2784c9b4928add67726ffd7eb7a8ec5585038d18384fc8d" exitCode=1
Feb 16 17:23:29 crc kubenswrapper[4794]: I0216 17:23:29.267645 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54fbcd866f-568dg" event={"ID":"07460b16-5cea-4a16-8389-dc1d3e7c3ee8","Type":"ContainerDied","Data":"5d689b127434cc55d2784c9b4928add67726ffd7eb7a8ec5585038d18384fc8d"}
Feb 16 17:23:29 crc kubenswrapper[4794]: I0216 17:23:29.267683 4794 scope.go:117] "RemoveContainer" containerID="2e5901393a901c4ec21896c2c623a2da69bb48a1c83744cd46e0028fa13d15c7"
Feb 16 17:23:29 crc kubenswrapper[4794]: I0216 17:23:29.268452 4794 scope.go:117] "RemoveContainer" containerID="5d689b127434cc55d2784c9b4928add67726ffd7eb7a8ec5585038d18384fc8d"
Feb 16 17:23:29 crc kubenswrapper[4794]: E0216 17:23:29.268779 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-54fbcd866f-568dg_openstack(07460b16-5cea-4a16-8389-dc1d3e7c3ee8)\"" pod="openstack/heat-cfnapi-54fbcd866f-568dg" podUID="07460b16-5cea-4a16-8389-dc1d3e7c3ee8"
Feb 16 17:23:29 crc kubenswrapper[4794]: I0216 17:23:29.271541 4794 scope.go:117] "RemoveContainer" containerID="e228aa19f3e4f3f7b5db5a369d576ef8d32dbe0c6adf629577e1444bde573485"
Feb 16 17:23:29 crc kubenswrapper[4794]: E0216 17:23:29.271789 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-54fcd965cd-jvdzj_openstack(d1755f51-5efd-43e0-902e-c2c1b6760350)\"" pod="openstack/heat-api-54fcd965cd-jvdzj" podUID="d1755f51-5efd-43e0-902e-c2c1b6760350"
Feb 16 17:23:29 crc kubenswrapper[4794]: I0216 17:23:29.275184 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0"
event={"ID":"58f60884-ce4b-47ac-8720-dd812acdc8a8","Type":"ContainerStarted","Data":"5341c1d34effe9d94440cc80abcbd23b2cdd37ae0ecfbd5621269709d436d4fa"} Feb 16 17:23:29 crc kubenswrapper[4794]: I0216 17:23:29.276235 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acdf75eb-c141-4f82-94f9-dd95db013ba7","Type":"ContainerStarted","Data":"e0b792bccee785b3dadc00ab69e369429e68cd2e968efa5ef07ebe987371d6e2"} Feb 16 17:23:29 crc kubenswrapper[4794]: I0216 17:23:29.434205 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.327371 4794 scope.go:117] "RemoveContainer" containerID="5d689b127434cc55d2784c9b4928add67726ffd7eb7a8ec5585038d18384fc8d" Feb 16 17:23:30 crc kubenswrapper[4794]: E0216 17:23:30.328948 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-54fbcd866f-568dg_openstack(07460b16-5cea-4a16-8389-dc1d3e7c3ee8)\"" pod="openstack/heat-cfnapi-54fbcd866f-568dg" podUID="07460b16-5cea-4a16-8389-dc1d3e7c3ee8" Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.330129 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"58f60884-ce4b-47ac-8720-dd812acdc8a8","Type":"ContainerStarted","Data":"56058d04567fedaf7472f62c2b93d3d35b6d7fd9c66403478e4db47d6eb9afd1"} Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.330524 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.345180 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acdf75eb-c141-4f82-94f9-dd95db013ba7","Type":"ContainerStarted","Data":"05cd36878ab4c8654b0bdaa4cdb45ca8d8a3ce6c325e55731ca7d3ae1b13f180"} Feb 16 17:23:30 crc 
kubenswrapper[4794]: I0216 17:23:30.345237 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acdf75eb-c141-4f82-94f9-dd95db013ba7","Type":"ContainerStarted","Data":"cc98ca6dacaf528e6ee39f94aedbceabc700f6277151f33e820bf28a61f6a147"} Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.346649 4794 scope.go:117] "RemoveContainer" containerID="e228aa19f3e4f3f7b5db5a369d576ef8d32dbe0c6adf629577e1444bde573485" Feb 16 17:23:30 crc kubenswrapper[4794]: E0216 17:23:30.347382 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-54fcd965cd-jvdzj_openstack(d1755f51-5efd-43e0-902e-c2c1b6760350)\"" pod="openstack/heat-api-54fcd965cd-jvdzj" podUID="d1755f51-5efd-43e0-902e-c2c1b6760350" Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.390674 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.390653326 podStartE2EDuration="5.390653326s" podCreationTimestamp="2026-02-16 17:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:30.388837665 +0000 UTC m=+1436.336932312" watchObservedRunningTime="2026-02-16 17:23:30.390653326 +0000 UTC m=+1436.338747983" Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.649686 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-g8d8v"] Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.651201 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-g8d8v" Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.669979 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-g8d8v"] Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.752739 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-sn2z4"] Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.754257 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-sn2z4" Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.767106 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-sn2z4"] Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.777131 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0770add-b35a-4790-b877-78e7a2661b48-operator-scripts\") pod \"nova-api-db-create-g8d8v\" (UID: \"e0770add-b35a-4790-b877-78e7a2661b48\") " pod="openstack/nova-api-db-create-g8d8v" Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.777309 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgmgf\" (UniqueName: \"kubernetes.io/projected/e0770add-b35a-4790-b877-78e7a2661b48-kube-api-access-dgmgf\") pod \"nova-api-db-create-g8d8v\" (UID: \"e0770add-b35a-4790-b877-78e7a2661b48\") " pod="openstack/nova-api-db-create-g8d8v" Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.848413 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9ed5-account-create-update-4c5kr"] Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.850153 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9ed5-account-create-update-4c5kr" Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.859423 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.867382 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9ed5-account-create-update-4c5kr"] Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.881570 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgmgf\" (UniqueName: \"kubernetes.io/projected/e0770add-b35a-4790-b877-78e7a2661b48-kube-api-access-dgmgf\") pod \"nova-api-db-create-g8d8v\" (UID: \"e0770add-b35a-4790-b877-78e7a2661b48\") " pod="openstack/nova-api-db-create-g8d8v" Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.881671 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tllbz\" (UniqueName: \"kubernetes.io/projected/1469ebf3-80e2-45db-bb76-9c0d75fa6ba0-kube-api-access-tllbz\") pod \"nova-cell0-db-create-sn2z4\" (UID: \"1469ebf3-80e2-45db-bb76-9c0d75fa6ba0\") " pod="openstack/nova-cell0-db-create-sn2z4" Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.881754 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0770add-b35a-4790-b877-78e7a2661b48-operator-scripts\") pod \"nova-api-db-create-g8d8v\" (UID: \"e0770add-b35a-4790-b877-78e7a2661b48\") " pod="openstack/nova-api-db-create-g8d8v" Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.881828 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1469ebf3-80e2-45db-bb76-9c0d75fa6ba0-operator-scripts\") pod \"nova-cell0-db-create-sn2z4\" (UID: \"1469ebf3-80e2-45db-bb76-9c0d75fa6ba0\") " 
pod="openstack/nova-cell0-db-create-sn2z4" Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.882879 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0770add-b35a-4790-b877-78e7a2661b48-operator-scripts\") pod \"nova-api-db-create-g8d8v\" (UID: \"e0770add-b35a-4790-b877-78e7a2661b48\") " pod="openstack/nova-api-db-create-g8d8v" Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.918282 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgmgf\" (UniqueName: \"kubernetes.io/projected/e0770add-b35a-4790-b877-78e7a2661b48-kube-api-access-dgmgf\") pod \"nova-api-db-create-g8d8v\" (UID: \"e0770add-b35a-4790-b877-78e7a2661b48\") " pod="openstack/nova-api-db-create-g8d8v" Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.976249 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-g8d8v" Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.983837 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab6d5750-0a23-4cce-8557-b0b1d867f91b-operator-scripts\") pod \"nova-api-9ed5-account-create-update-4c5kr\" (UID: \"ab6d5750-0a23-4cce-8557-b0b1d867f91b\") " pod="openstack/nova-api-9ed5-account-create-update-4c5kr" Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.983933 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tllbz\" (UniqueName: \"kubernetes.io/projected/1469ebf3-80e2-45db-bb76-9c0d75fa6ba0-kube-api-access-tllbz\") pod \"nova-cell0-db-create-sn2z4\" (UID: \"1469ebf3-80e2-45db-bb76-9c0d75fa6ba0\") " pod="openstack/nova-cell0-db-create-sn2z4" Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.983961 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vrkp5\" (UniqueName: \"kubernetes.io/projected/ab6d5750-0a23-4cce-8557-b0b1d867f91b-kube-api-access-vrkp5\") pod \"nova-api-9ed5-account-create-update-4c5kr\" (UID: \"ab6d5750-0a23-4cce-8557-b0b1d867f91b\") " pod="openstack/nova-api-9ed5-account-create-update-4c5kr" Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.984170 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1469ebf3-80e2-45db-bb76-9c0d75fa6ba0-operator-scripts\") pod \"nova-cell0-db-create-sn2z4\" (UID: \"1469ebf3-80e2-45db-bb76-9c0d75fa6ba0\") " pod="openstack/nova-cell0-db-create-sn2z4" Feb 16 17:23:30 crc kubenswrapper[4794]: I0216 17:23:30.984913 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1469ebf3-80e2-45db-bb76-9c0d75fa6ba0-operator-scripts\") pod \"nova-cell0-db-create-sn2z4\" (UID: \"1469ebf3-80e2-45db-bb76-9c0d75fa6ba0\") " pod="openstack/nova-cell0-db-create-sn2z4" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.009840 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tllbz\" (UniqueName: \"kubernetes.io/projected/1469ebf3-80e2-45db-bb76-9c0d75fa6ba0-kube-api-access-tllbz\") pod \"nova-cell0-db-create-sn2z4\" (UID: \"1469ebf3-80e2-45db-bb76-9c0d75fa6ba0\") " pod="openstack/nova-cell0-db-create-sn2z4" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.077330 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-sn2z4" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.087481 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab6d5750-0a23-4cce-8557-b0b1d867f91b-operator-scripts\") pod \"nova-api-9ed5-account-create-update-4c5kr\" (UID: \"ab6d5750-0a23-4cce-8557-b0b1d867f91b\") " pod="openstack/nova-api-9ed5-account-create-update-4c5kr" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.087579 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrkp5\" (UniqueName: \"kubernetes.io/projected/ab6d5750-0a23-4cce-8557-b0b1d867f91b-kube-api-access-vrkp5\") pod \"nova-api-9ed5-account-create-update-4c5kr\" (UID: \"ab6d5750-0a23-4cce-8557-b0b1d867f91b\") " pod="openstack/nova-api-9ed5-account-create-update-4c5kr" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.089925 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab6d5750-0a23-4cce-8557-b0b1d867f91b-operator-scripts\") pod \"nova-api-9ed5-account-create-update-4c5kr\" (UID: \"ab6d5750-0a23-4cce-8557-b0b1d867f91b\") " pod="openstack/nova-api-9ed5-account-create-update-4c5kr" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.089986 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7154-account-create-update-wzjn5"] Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.100934 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7154-account-create-update-wzjn5" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.105814 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrkp5\" (UniqueName: \"kubernetes.io/projected/ab6d5750-0a23-4cce-8557-b0b1d867f91b-kube-api-access-vrkp5\") pod \"nova-api-9ed5-account-create-update-4c5kr\" (UID: \"ab6d5750-0a23-4cce-8557-b0b1d867f91b\") " pod="openstack/nova-api-9ed5-account-create-update-4c5kr" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.107322 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.114237 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-vsrl6"] Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.122654 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vsrl6" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.189584 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9ed5-account-create-update-4c5kr" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.196502 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7154-account-create-update-wzjn5"] Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.233113 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vsrl6"] Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.282488 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-8970-account-create-update-bjpr9"] Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.283995 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-8970-account-create-update-bjpr9" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.294593 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8970-account-create-update-bjpr9"] Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.295990 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.297591 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99jcb\" (UniqueName: \"kubernetes.io/projected/589ff3be-38fe-4b42-9465-749794f9d7ac-kube-api-access-99jcb\") pod \"nova-cell1-db-create-vsrl6\" (UID: \"589ff3be-38fe-4b42-9465-749794f9d7ac\") " pod="openstack/nova-cell1-db-create-vsrl6" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.297923 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ngzr\" (UniqueName: \"kubernetes.io/projected/baa5596a-0a62-46eb-9652-e6fd66238582-kube-api-access-4ngzr\") pod \"nova-cell0-7154-account-create-update-wzjn5\" (UID: \"baa5596a-0a62-46eb-9652-e6fd66238582\") " pod="openstack/nova-cell0-7154-account-create-update-wzjn5" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.298027 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/589ff3be-38fe-4b42-9465-749794f9d7ac-operator-scripts\") pod \"nova-cell1-db-create-vsrl6\" (UID: \"589ff3be-38fe-4b42-9465-749794f9d7ac\") " pod="openstack/nova-cell1-db-create-vsrl6" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.298128 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baa5596a-0a62-46eb-9652-e6fd66238582-operator-scripts\") pod 
\"nova-cell0-7154-account-create-update-wzjn5\" (UID: \"baa5596a-0a62-46eb-9652-e6fd66238582\") " pod="openstack/nova-cell0-7154-account-create-update-wzjn5" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.399929 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22fb06b0-be61-4104-bd06-e83653551448-operator-scripts\") pod \"nova-cell1-8970-account-create-update-bjpr9\" (UID: \"22fb06b0-be61-4104-bd06-e83653551448\") " pod="openstack/nova-cell1-8970-account-create-update-bjpr9" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.400004 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ngzr\" (UniqueName: \"kubernetes.io/projected/baa5596a-0a62-46eb-9652-e6fd66238582-kube-api-access-4ngzr\") pod \"nova-cell0-7154-account-create-update-wzjn5\" (UID: \"baa5596a-0a62-46eb-9652-e6fd66238582\") " pod="openstack/nova-cell0-7154-account-create-update-wzjn5" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.400043 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v65hq\" (UniqueName: \"kubernetes.io/projected/22fb06b0-be61-4104-bd06-e83653551448-kube-api-access-v65hq\") pod \"nova-cell1-8970-account-create-update-bjpr9\" (UID: \"22fb06b0-be61-4104-bd06-e83653551448\") " pod="openstack/nova-cell1-8970-account-create-update-bjpr9" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.400079 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/589ff3be-38fe-4b42-9465-749794f9d7ac-operator-scripts\") pod \"nova-cell1-db-create-vsrl6\" (UID: \"589ff3be-38fe-4b42-9465-749794f9d7ac\") " pod="openstack/nova-cell1-db-create-vsrl6" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.400587 4794 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baa5596a-0a62-46eb-9652-e6fd66238582-operator-scripts\") pod \"nova-cell0-7154-account-create-update-wzjn5\" (UID: \"baa5596a-0a62-46eb-9652-e6fd66238582\") " pod="openstack/nova-cell0-7154-account-create-update-wzjn5" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.400645 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99jcb\" (UniqueName: \"kubernetes.io/projected/589ff3be-38fe-4b42-9465-749794f9d7ac-kube-api-access-99jcb\") pod \"nova-cell1-db-create-vsrl6\" (UID: \"589ff3be-38fe-4b42-9465-749794f9d7ac\") " pod="openstack/nova-cell1-db-create-vsrl6" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.402830 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baa5596a-0a62-46eb-9652-e6fd66238582-operator-scripts\") pod \"nova-cell0-7154-account-create-update-wzjn5\" (UID: \"baa5596a-0a62-46eb-9652-e6fd66238582\") " pod="openstack/nova-cell0-7154-account-create-update-wzjn5" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.417020 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/589ff3be-38fe-4b42-9465-749794f9d7ac-operator-scripts\") pod \"nova-cell1-db-create-vsrl6\" (UID: \"589ff3be-38fe-4b42-9465-749794f9d7ac\") " pod="openstack/nova-cell1-db-create-vsrl6" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.428426 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ngzr\" (UniqueName: \"kubernetes.io/projected/baa5596a-0a62-46eb-9652-e6fd66238582-kube-api-access-4ngzr\") pod \"nova-cell0-7154-account-create-update-wzjn5\" (UID: \"baa5596a-0a62-46eb-9652-e6fd66238582\") " pod="openstack/nova-cell0-7154-account-create-update-wzjn5" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.428474 4794 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99jcb\" (UniqueName: \"kubernetes.io/projected/589ff3be-38fe-4b42-9465-749794f9d7ac-kube-api-access-99jcb\") pod \"nova-cell1-db-create-vsrl6\" (UID: \"589ff3be-38fe-4b42-9465-749794f9d7ac\") " pod="openstack/nova-cell1-db-create-vsrl6" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.502827 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v65hq\" (UniqueName: \"kubernetes.io/projected/22fb06b0-be61-4104-bd06-e83653551448-kube-api-access-v65hq\") pod \"nova-cell1-8970-account-create-update-bjpr9\" (UID: \"22fb06b0-be61-4104-bd06-e83653551448\") " pod="openstack/nova-cell1-8970-account-create-update-bjpr9" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.503192 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22fb06b0-be61-4104-bd06-e83653551448-operator-scripts\") pod \"nova-cell1-8970-account-create-update-bjpr9\" (UID: \"22fb06b0-be61-4104-bd06-e83653551448\") " pod="openstack/nova-cell1-8970-account-create-update-bjpr9" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.506006 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22fb06b0-be61-4104-bd06-e83653551448-operator-scripts\") pod \"nova-cell1-8970-account-create-update-bjpr9\" (UID: \"22fb06b0-be61-4104-bd06-e83653551448\") " pod="openstack/nova-cell1-8970-account-create-update-bjpr9" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.558414 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v65hq\" (UniqueName: \"kubernetes.io/projected/22fb06b0-be61-4104-bd06-e83653551448-kube-api-access-v65hq\") pod \"nova-cell1-8970-account-create-update-bjpr9\" (UID: \"22fb06b0-be61-4104-bd06-e83653551448\") " 
pod="openstack/nova-cell1-8970-account-create-update-bjpr9" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.559237 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.559449 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4cf7b50d-6ee8-41b2-b69f-123961055859" containerName="glance-log" containerID="cri-o://ef455fabfdbfeea902086b034c0c5be8b9f499365193ee1ac962d1962cc87e5a" gracePeriod=30 Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.559896 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4cf7b50d-6ee8-41b2-b69f-123961055859" containerName="glance-httpd" containerID="cri-o://89e84af07a003d96cbc866678c72d698389a2a203439d0e7c5e5c35be3e29833" gracePeriod=30 Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.674847 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7154-account-create-update-wzjn5" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.692656 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vsrl6" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.720000 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-8970-account-create-update-bjpr9" Feb 16 17:23:31 crc kubenswrapper[4794]: I0216 17:23:31.848087 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-g8d8v"] Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.034230 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-sn2z4"] Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.321980 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9ed5-account-create-update-4c5kr"] Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.489955 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9ed5-account-create-update-4c5kr" event={"ID":"ab6d5750-0a23-4cce-8557-b0b1d867f91b","Type":"ContainerStarted","Data":"faeb80c1f24d9bad6139eb4321bd44f16d037ff3456ecbc47711f4cd2dd9ec07"} Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.506505 4794 generic.go:334] "Generic (PLEG): container finished" podID="b92c8cdf-5125-46d9-89c1-8549a2dc1b74" containerID="ad5759da07de4f2d2fa94d28bda14c0227f2d26817e7a0edb7a7e29f3edd7c8f" exitCode=0 Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.506614 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-57b87468-bqjtk" event={"ID":"b92c8cdf-5125-46d9-89c1-8549a2dc1b74","Type":"ContainerDied","Data":"ad5759da07de4f2d2fa94d28bda14c0227f2d26817e7a0edb7a7e29f3edd7c8f"} Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.507833 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g8d8v" event={"ID":"e0770add-b35a-4790-b877-78e7a2661b48","Type":"ContainerStarted","Data":"b742909a2cb2270531d1aac2b672dbe56ce28a9869ad2b6b27f42d313793f196"} Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.509961 4794 generic.go:334] "Generic (PLEG): container finished" podID="4cf7b50d-6ee8-41b2-b69f-123961055859" 
containerID="ef455fabfdbfeea902086b034c0c5be8b9f499365193ee1ac962d1962cc87e5a" exitCode=143 Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.510004 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4cf7b50d-6ee8-41b2-b69f-123961055859","Type":"ContainerDied","Data":"ef455fabfdbfeea902086b034c0c5be8b9f499365193ee1ac962d1962cc87e5a"} Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.511332 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-sn2z4" event={"ID":"1469ebf3-80e2-45db-bb76-9c0d75fa6ba0","Type":"ContainerStarted","Data":"7361e542dec1558adeb043f90a8655b669f55bad8abdafb081dc30c0b058c4ce"} Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.531292 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7154-account-create-update-wzjn5"] Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.574804 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-57b87468-bqjtk" Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.643932 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-logs\") pod \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.644024 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-public-tls-certs\") pod \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.644077 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-config-data\") pod \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.644147 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-scripts\") pod \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.644184 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbdkq\" (UniqueName: \"kubernetes.io/projected/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-kube-api-access-zbdkq\") pod \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.644209 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-internal-tls-certs\") pod \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.644249 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-combined-ca-bundle\") pod \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\" (UID: \"b92c8cdf-5125-46d9-89c1-8549a2dc1b74\") " Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.650787 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-logs" (OuterVolumeSpecName: "logs") pod "b92c8cdf-5125-46d9-89c1-8549a2dc1b74" (UID: "b92c8cdf-5125-46d9-89c1-8549a2dc1b74"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.661545 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-kube-api-access-zbdkq" (OuterVolumeSpecName: "kube-api-access-zbdkq") pod "b92c8cdf-5125-46d9-89c1-8549a2dc1b74" (UID: "b92c8cdf-5125-46d9-89c1-8549a2dc1b74"). InnerVolumeSpecName "kube-api-access-zbdkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.672469 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-scripts" (OuterVolumeSpecName: "scripts") pod "b92c8cdf-5125-46d9-89c1-8549a2dc1b74" (UID: "b92c8cdf-5125-46d9-89c1-8549a2dc1b74"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.759633 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.759970 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbdkq\" (UniqueName: \"kubernetes.io/projected/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-kube-api-access-zbdkq\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.759988 4794 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.829709 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:32 crc kubenswrapper[4794]: I0216 17:23:32.829750 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6cb67474dc-d4tmw" Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.162387 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b92c8cdf-5125-46d9-89c1-8549a2dc1b74" (UID: "b92c8cdf-5125-46d9-89c1-8549a2dc1b74"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.190576 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.227210 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vsrl6"] Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.253488 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8970-account-create-update-bjpr9"] Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.261602 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-config-data" (OuterVolumeSpecName: "config-data") pod "b92c8cdf-5125-46d9-89c1-8549a2dc1b74" (UID: "b92c8cdf-5125-46d9-89c1-8549a2dc1b74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.292356 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.408548 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b92c8cdf-5125-46d9-89c1-8549a2dc1b74" (UID: "b92c8cdf-5125-46d9-89c1-8549a2dc1b74"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.408921 4794 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.497434 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b92c8cdf-5125-46d9-89c1-8549a2dc1b74" (UID: "b92c8cdf-5125-46d9-89c1-8549a2dc1b74"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.514013 4794 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b92c8cdf-5125-46d9-89c1-8549a2dc1b74-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.523522 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-sn2z4" event={"ID":"1469ebf3-80e2-45db-bb76-9c0d75fa6ba0","Type":"ContainerStarted","Data":"8b9d4df14b19d8a7d1f25a537e2b0eedc5720d1d72f45bd9882b2e2d1e86d954"} Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.526454 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9ed5-account-create-update-4c5kr" event={"ID":"ab6d5750-0a23-4cce-8557-b0b1d867f91b","Type":"ContainerStarted","Data":"9c411ab12e345bf5eb3aa9ad5f019b654392dfe10ca69b38d59cefff19ea1efe"} Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.528512 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8970-account-create-update-bjpr9" 
event={"ID":"22fb06b0-be61-4104-bd06-e83653551448","Type":"ContainerStarted","Data":"7619b2bf8fc1200dd608f90ec827ebbcac8b8eb879aaf0799c12f9c9155cf506"} Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.531347 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-57b87468-bqjtk" event={"ID":"b92c8cdf-5125-46d9-89c1-8549a2dc1b74","Type":"ContainerDied","Data":"9159454ff34f615c02055d29642b7f8c4cf8c4af2dab8a0f4af98030cad8168a"} Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.531382 4794 scope.go:117] "RemoveContainer" containerID="ad5759da07de4f2d2fa94d28bda14c0227f2d26817e7a0edb7a7e29f3edd7c8f" Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.531492 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-57b87468-bqjtk" Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.540369 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7154-account-create-update-wzjn5" event={"ID":"baa5596a-0a62-46eb-9652-e6fd66238582","Type":"ContainerStarted","Data":"b7a35f3b78548f1be1b9d9518d0143e50d94669d1f71d3e011ef618f04698e89"} Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.555395 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-sn2z4" podStartSLOduration=3.555376018 podStartE2EDuration="3.555376018s" podCreationTimestamp="2026-02-16 17:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:33.548496094 +0000 UTC m=+1439.496590741" watchObservedRunningTime="2026-02-16 17:23:33.555376018 +0000 UTC m=+1439.503470665" Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.559501 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vsrl6" 
event={"ID":"589ff3be-38fe-4b42-9465-749794f9d7ac","Type":"ContainerStarted","Data":"a4b03611d5e19d2f8bd4e57a11527213d12065462261adf8808f888b0be5ad80"} Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.577699 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acdf75eb-c141-4f82-94f9-dd95db013ba7","Type":"ContainerStarted","Data":"a4c8de7d82737b60c8a6e1c698fbb67eb43ed3b4451edc75d24a13c8a11af62b"} Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.577862 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="acdf75eb-c141-4f82-94f9-dd95db013ba7" containerName="ceilometer-central-agent" containerID="cri-o://e0b792bccee785b3dadc00ab69e369429e68cd2e968efa5ef07ebe987371d6e2" gracePeriod=30 Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.578089 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.578553 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="acdf75eb-c141-4f82-94f9-dd95db013ba7" containerName="proxy-httpd" containerID="cri-o://a4c8de7d82737b60c8a6e1c698fbb67eb43ed3b4451edc75d24a13c8a11af62b" gracePeriod=30 Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.578608 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="acdf75eb-c141-4f82-94f9-dd95db013ba7" containerName="sg-core" containerID="cri-o://05cd36878ab4c8654b0bdaa4cdb45ca8d8a3ce6c325e55731ca7d3ae1b13f180" gracePeriod=30 Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.578644 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="acdf75eb-c141-4f82-94f9-dd95db013ba7" containerName="ceilometer-notification-agent" containerID="cri-o://cc98ca6dacaf528e6ee39f94aedbceabc700f6277151f33e820bf28a61f6a147" 
gracePeriod=30 Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.582476 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g8d8v" event={"ID":"e0770add-b35a-4790-b877-78e7a2661b48","Type":"ContainerStarted","Data":"8f80acd8b9b614a25974d6d5208ae776bb28b2b98fccdb9c6243444def7d7447"} Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.590580 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-9ed5-account-create-update-4c5kr" podStartSLOduration=3.590560161 podStartE2EDuration="3.590560161s" podCreationTimestamp="2026-02-16 17:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:33.576926616 +0000 UTC m=+1439.525021263" watchObservedRunningTime="2026-02-16 17:23:33.590560161 +0000 UTC m=+1439.538654808" Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.620390 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.688560986 podStartE2EDuration="8.620368843s" podCreationTimestamp="2026-02-16 17:23:25 +0000 UTC" firstStartedPulling="2026-02-16 17:23:27.521026525 +0000 UTC m=+1433.469121172" lastFinishedPulling="2026-02-16 17:23:32.452834382 +0000 UTC m=+1438.400929029" observedRunningTime="2026-02-16 17:23:33.600023388 +0000 UTC m=+1439.548118045" watchObservedRunningTime="2026-02-16 17:23:33.620368843 +0000 UTC m=+1439.568463490" Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.642854 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-g8d8v" podStartSLOduration=3.642828297 podStartE2EDuration="3.642828297s" podCreationTimestamp="2026-02-16 17:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:33.61036207 +0000 UTC m=+1439.558456717" 
watchObservedRunningTime="2026-02-16 17:23:33.642828297 +0000 UTC m=+1439.590922944" Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.793859 4794 scope.go:117] "RemoveContainer" containerID="025c1074d07a609013845c69edfd72908e6163c9e3e9bc96693d96d9fbd6981f" Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.833379 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-57b87468-bqjtk"] Feb 16 17:23:33 crc kubenswrapper[4794]: I0216 17:23:33.843557 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-57b87468-bqjtk"] Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.067340 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-54fbcd866f-568dg" Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.067394 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-54fbcd866f-568dg" Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.068153 4794 scope.go:117] "RemoveContainer" containerID="5d689b127434cc55d2784c9b4928add67726ffd7eb7a8ec5585038d18384fc8d" Feb 16 17:23:34 crc kubenswrapper[4794]: E0216 17:23:34.068565 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-54fbcd866f-568dg_openstack(07460b16-5cea-4a16-8389-dc1d3e7c3ee8)\"" pod="openstack/heat-cfnapi-54fbcd866f-568dg" podUID="07460b16-5cea-4a16-8389-dc1d3e7c3ee8" Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.232121 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-868454c84d-mwnsk" Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.304658 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-54fcd965cd-jvdzj"] Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.635969 4794 generic.go:334] "Generic (PLEG): 
container finished" podID="acdf75eb-c141-4f82-94f9-dd95db013ba7" containerID="a4c8de7d82737b60c8a6e1c698fbb67eb43ed3b4451edc75d24a13c8a11af62b" exitCode=0 Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.636001 4794 generic.go:334] "Generic (PLEG): container finished" podID="acdf75eb-c141-4f82-94f9-dd95db013ba7" containerID="05cd36878ab4c8654b0bdaa4cdb45ca8d8a3ce6c325e55731ca7d3ae1b13f180" exitCode=2 Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.636008 4794 generic.go:334] "Generic (PLEG): container finished" podID="acdf75eb-c141-4f82-94f9-dd95db013ba7" containerID="cc98ca6dacaf528e6ee39f94aedbceabc700f6277151f33e820bf28a61f6a147" exitCode=0 Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.636059 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acdf75eb-c141-4f82-94f9-dd95db013ba7","Type":"ContainerDied","Data":"a4c8de7d82737b60c8a6e1c698fbb67eb43ed3b4451edc75d24a13c8a11af62b"} Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.636098 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acdf75eb-c141-4f82-94f9-dd95db013ba7","Type":"ContainerDied","Data":"05cd36878ab4c8654b0bdaa4cdb45ca8d8a3ce6c325e55731ca7d3ae1b13f180"} Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.636111 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acdf75eb-c141-4f82-94f9-dd95db013ba7","Type":"ContainerDied","Data":"cc98ca6dacaf528e6ee39f94aedbceabc700f6277151f33e820bf28a61f6a147"} Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.641770 4794 generic.go:334] "Generic (PLEG): container finished" podID="e0770add-b35a-4790-b877-78e7a2661b48" containerID="8f80acd8b9b614a25974d6d5208ae776bb28b2b98fccdb9c6243444def7d7447" exitCode=0 Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.641827 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g8d8v" 
event={"ID":"e0770add-b35a-4790-b877-78e7a2661b48","Type":"ContainerDied","Data":"8f80acd8b9b614a25974d6d5208ae776bb28b2b98fccdb9c6243444def7d7447"} Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.649593 4794 generic.go:334] "Generic (PLEG): container finished" podID="1469ebf3-80e2-45db-bb76-9c0d75fa6ba0" containerID="8b9d4df14b19d8a7d1f25a537e2b0eedc5720d1d72f45bd9882b2e2d1e86d954" exitCode=0 Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.649655 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-sn2z4" event={"ID":"1469ebf3-80e2-45db-bb76-9c0d75fa6ba0","Type":"ContainerDied","Data":"8b9d4df14b19d8a7d1f25a537e2b0eedc5720d1d72f45bd9882b2e2d1e86d954"} Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.656598 4794 generic.go:334] "Generic (PLEG): container finished" podID="ab6d5750-0a23-4cce-8557-b0b1d867f91b" containerID="9c411ab12e345bf5eb3aa9ad5f019b654392dfe10ca69b38d59cefff19ea1efe" exitCode=0 Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.656766 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9ed5-account-create-update-4c5kr" event={"ID":"ab6d5750-0a23-4cce-8557-b0b1d867f91b","Type":"ContainerDied","Data":"9c411ab12e345bf5eb3aa9ad5f019b654392dfe10ca69b38d59cefff19ea1efe"} Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.660930 4794 generic.go:334] "Generic (PLEG): container finished" podID="22fb06b0-be61-4104-bd06-e83653551448" containerID="0bece7d5f48dd15b39b877b5ae23d371ea7e7316114f0f37dd3c64ed978a21cf" exitCode=0 Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.660975 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8970-account-create-update-bjpr9" event={"ID":"22fb06b0-be61-4104-bd06-e83653551448","Type":"ContainerDied","Data":"0bece7d5f48dd15b39b877b5ae23d371ea7e7316114f0f37dd3c64ed978a21cf"} Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.664944 4794 generic.go:334] "Generic (PLEG): container finished" 
podID="baa5596a-0a62-46eb-9652-e6fd66238582" containerID="00af2cf0c0dc48b41c8b45fbe8fd9b92f4071b05ba2749a483c92ce3cb5c8a31" exitCode=0 Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.664982 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7154-account-create-update-wzjn5" event={"ID":"baa5596a-0a62-46eb-9652-e6fd66238582","Type":"ContainerDied","Data":"00af2cf0c0dc48b41c8b45fbe8fd9b92f4071b05ba2749a483c92ce3cb5c8a31"} Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.676241 4794 generic.go:334] "Generic (PLEG): container finished" podID="589ff3be-38fe-4b42-9465-749794f9d7ac" containerID="e07c6e5e2092eed02c076d0483a4fa722898fa351f4362e0d7732096b6b23487" exitCode=0 Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.676916 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vsrl6" event={"ID":"589ff3be-38fe-4b42-9465-749794f9d7ac","Type":"ContainerDied","Data":"e07c6e5e2092eed02c076d0483a4fa722898fa351f4362e0d7732096b6b23487"} Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.821586 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b92c8cdf-5125-46d9-89c1-8549a2dc1b74" path="/var/lib/kubelet/pods/b92c8cdf-5125-46d9-89c1-8549a2dc1b74/volumes" Feb 16 17:23:34 crc kubenswrapper[4794]: I0216 17:23:34.962713 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-54fcd965cd-jvdzj" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.053907 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-849cbf9447-6chxp" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.067285 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knwkv\" (UniqueName: \"kubernetes.io/projected/d1755f51-5efd-43e0-902e-c2c1b6760350-kube-api-access-knwkv\") pod \"d1755f51-5efd-43e0-902e-c2c1b6760350\" (UID: \"d1755f51-5efd-43e0-902e-c2c1b6760350\") " Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.067601 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1755f51-5efd-43e0-902e-c2c1b6760350-config-data\") pod \"d1755f51-5efd-43e0-902e-c2c1b6760350\" (UID: \"d1755f51-5efd-43e0-902e-c2c1b6760350\") " Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.067751 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1755f51-5efd-43e0-902e-c2c1b6760350-config-data-custom\") pod \"d1755f51-5efd-43e0-902e-c2c1b6760350\" (UID: \"d1755f51-5efd-43e0-902e-c2c1b6760350\") " Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.068492 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1755f51-5efd-43e0-902e-c2c1b6760350-combined-ca-bundle\") pod \"d1755f51-5efd-43e0-902e-c2c1b6760350\" (UID: \"d1755f51-5efd-43e0-902e-c2c1b6760350\") " Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.075099 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1755f51-5efd-43e0-902e-c2c1b6760350-kube-api-access-knwkv" (OuterVolumeSpecName: "kube-api-access-knwkv") pod "d1755f51-5efd-43e0-902e-c2c1b6760350" 
(UID: "d1755f51-5efd-43e0-902e-c2c1b6760350"). InnerVolumeSpecName "kube-api-access-knwkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.085444 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1755f51-5efd-43e0-902e-c2c1b6760350-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d1755f51-5efd-43e0-902e-c2c1b6760350" (UID: "d1755f51-5efd-43e0-902e-c2c1b6760350"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.151718 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1755f51-5efd-43e0-902e-c2c1b6760350-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1755f51-5efd-43e0-902e-c2c1b6760350" (UID: "d1755f51-5efd-43e0-902e-c2c1b6760350"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.178857 4794 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1755f51-5efd-43e0-902e-c2c1b6760350-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.178891 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1755f51-5efd-43e0-902e-c2c1b6760350-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.178901 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knwkv\" (UniqueName: \"kubernetes.io/projected/d1755f51-5efd-43e0-902e-c2c1b6760350-kube-api-access-knwkv\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.190480 4794 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/d1755f51-5efd-43e0-902e-c2c1b6760350-config-data" (OuterVolumeSpecName: "config-data") pod "d1755f51-5efd-43e0-902e-c2c1b6760350" (UID: "d1755f51-5efd-43e0-902e-c2c1b6760350"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.219840 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-54fbcd866f-568dg"] Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.283622 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1755f51-5efd-43e0-902e-c2c1b6760350-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.447088 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.590259 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-combined-ca-bundle\") pod \"4cf7b50d-6ee8-41b2-b69f-123961055859\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.590392 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8rjm\" (UniqueName: \"kubernetes.io/projected/4cf7b50d-6ee8-41b2-b69f-123961055859-kube-api-access-c8rjm\") pod \"4cf7b50d-6ee8-41b2-b69f-123961055859\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.590433 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-config-data\") pod \"4cf7b50d-6ee8-41b2-b69f-123961055859\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " Feb 
16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.590465 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-scripts\") pod \"4cf7b50d-6ee8-41b2-b69f-123961055859\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.591208 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\") pod \"4cf7b50d-6ee8-41b2-b69f-123961055859\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.591281 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-public-tls-certs\") pod \"4cf7b50d-6ee8-41b2-b69f-123961055859\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.591351 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4cf7b50d-6ee8-41b2-b69f-123961055859-httpd-run\") pod \"4cf7b50d-6ee8-41b2-b69f-123961055859\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.591400 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cf7b50d-6ee8-41b2-b69f-123961055859-logs\") pod \"4cf7b50d-6ee8-41b2-b69f-123961055859\" (UID: \"4cf7b50d-6ee8-41b2-b69f-123961055859\") " Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.592264 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cf7b50d-6ee8-41b2-b69f-123961055859-logs" (OuterVolumeSpecName: "logs") pod 
"4cf7b50d-6ee8-41b2-b69f-123961055859" (UID: "4cf7b50d-6ee8-41b2-b69f-123961055859"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.609509 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-scripts" (OuterVolumeSpecName: "scripts") pod "4cf7b50d-6ee8-41b2-b69f-123961055859" (UID: "4cf7b50d-6ee8-41b2-b69f-123961055859"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.618650 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cf7b50d-6ee8-41b2-b69f-123961055859-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4cf7b50d-6ee8-41b2-b69f-123961055859" (UID: "4cf7b50d-6ee8-41b2-b69f-123961055859"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.624595 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cf7b50d-6ee8-41b2-b69f-123961055859-kube-api-access-c8rjm" (OuterVolumeSpecName: "kube-api-access-c8rjm") pod "4cf7b50d-6ee8-41b2-b69f-123961055859" (UID: "4cf7b50d-6ee8-41b2-b69f-123961055859"). InnerVolumeSpecName "kube-api-access-c8rjm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.694733 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8rjm\" (UniqueName: \"kubernetes.io/projected/4cf7b50d-6ee8-41b2-b69f-123961055859-kube-api-access-c8rjm\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.694777 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.694789 4794 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4cf7b50d-6ee8-41b2-b69f-123961055859-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.694800 4794 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cf7b50d-6ee8-41b2-b69f-123961055859-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.713499 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cf7b50d-6ee8-41b2-b69f-123961055859" (UID: "4cf7b50d-6ee8-41b2-b69f-123961055859"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.727283 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-54fcd965cd-jvdzj" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.728393 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-54fcd965cd-jvdzj" event={"ID":"d1755f51-5efd-43e0-902e-c2c1b6760350","Type":"ContainerDied","Data":"36722f1967f1d98dec65a551cad81925c20fc4158ae361852e886c67636660cc"} Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.728465 4794 scope.go:117] "RemoveContainer" containerID="e228aa19f3e4f3f7b5db5a369d576ef8d32dbe0c6adf629577e1444bde573485" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.731899 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-config-data" (OuterVolumeSpecName: "config-data") pod "4cf7b50d-6ee8-41b2-b69f-123961055859" (UID: "4cf7b50d-6ee8-41b2-b69f-123961055859"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.758679 4794 generic.go:334] "Generic (PLEG): container finished" podID="4cf7b50d-6ee8-41b2-b69f-123961055859" containerID="89e84af07a003d96cbc866678c72d698389a2a203439d0e7c5e5c35be3e29833" exitCode=0 Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.758840 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4cf7b50d-6ee8-41b2-b69f-123961055859","Type":"ContainerDied","Data":"89e84af07a003d96cbc866678c72d698389a2a203439d0e7c5e5c35be3e29833"} Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.758907 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4cf7b50d-6ee8-41b2-b69f-123961055859","Type":"ContainerDied","Data":"6f0e21fb54e8b5c703a514d2f3c7034ddb9a4aac709051e03a9fb6c053f3b800"} Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.759125 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.776794 4794 scope.go:117] "RemoveContainer" containerID="89e84af07a003d96cbc866678c72d698389a2a203439d0e7c5e5c35be3e29833" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.780545 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-54fbcd866f-568dg" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.798390 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.798431 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.819671 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4cf7b50d-6ee8-41b2-b69f-123961055859" (UID: "4cf7b50d-6ee8-41b2-b69f-123961055859"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.867482 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-54fcd965cd-jvdzj"] Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.900387 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-config-data\") pod \"07460b16-5cea-4a16-8389-dc1d3e7c3ee8\" (UID: \"07460b16-5cea-4a16-8389-dc1d3e7c3ee8\") " Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.900539 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-config-data-custom\") pod \"07460b16-5cea-4a16-8389-dc1d3e7c3ee8\" (UID: \"07460b16-5cea-4a16-8389-dc1d3e7c3ee8\") " Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.900595 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6dqp\" (UniqueName: \"kubernetes.io/projected/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-kube-api-access-f6dqp\") pod \"07460b16-5cea-4a16-8389-dc1d3e7c3ee8\" (UID: \"07460b16-5cea-4a16-8389-dc1d3e7c3ee8\") " Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.900632 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-combined-ca-bundle\") pod \"07460b16-5cea-4a16-8389-dc1d3e7c3ee8\" (UID: \"07460b16-5cea-4a16-8389-dc1d3e7c3ee8\") " Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.901980 4794 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cf7b50d-6ee8-41b2-b69f-123961055859-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.915435 4794 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "07460b16-5cea-4a16-8389-dc1d3e7c3ee8" (UID: "07460b16-5cea-4a16-8389-dc1d3e7c3ee8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.923252 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-kube-api-access-f6dqp" (OuterVolumeSpecName: "kube-api-access-f6dqp") pod "07460b16-5cea-4a16-8389-dc1d3e7c3ee8" (UID: "07460b16-5cea-4a16-8389-dc1d3e7c3ee8"). InnerVolumeSpecName "kube-api-access-f6dqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.938370 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-54fcd965cd-jvdzj"] Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.979990 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07460b16-5cea-4a16-8389-dc1d3e7c3ee8" (UID: "07460b16-5cea-4a16-8389-dc1d3e7c3ee8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:35 crc kubenswrapper[4794]: I0216 17:23:35.985920 4794 scope.go:117] "RemoveContainer" containerID="ef455fabfdbfeea902086b034c0c5be8b9f499365193ee1ac962d1962cc87e5a" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.004031 4794 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.004058 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6dqp\" (UniqueName: \"kubernetes.io/projected/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-kube-api-access-f6dqp\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.004067 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.025385 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-config-data" (OuterVolumeSpecName: "config-data") pod "07460b16-5cea-4a16-8389-dc1d3e7c3ee8" (UID: "07460b16-5cea-4a16-8389-dc1d3e7c3ee8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.040271 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b" (OuterVolumeSpecName: "glance") pod "4cf7b50d-6ee8-41b2-b69f-123961055859" (UID: "4cf7b50d-6ee8-41b2-b69f-123961055859"). InnerVolumeSpecName "pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.101952 4794 scope.go:117] "RemoveContainer" containerID="89e84af07a003d96cbc866678c72d698389a2a203439d0e7c5e5c35be3e29833" Feb 16 17:23:36 crc kubenswrapper[4794]: E0216 17:23:36.102278 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89e84af07a003d96cbc866678c72d698389a2a203439d0e7c5e5c35be3e29833\": container with ID starting with 89e84af07a003d96cbc866678c72d698389a2a203439d0e7c5e5c35be3e29833 not found: ID does not exist" containerID="89e84af07a003d96cbc866678c72d698389a2a203439d0e7c5e5c35be3e29833" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.102329 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89e84af07a003d96cbc866678c72d698389a2a203439d0e7c5e5c35be3e29833"} err="failed to get container status \"89e84af07a003d96cbc866678c72d698389a2a203439d0e7c5e5c35be3e29833\": rpc error: code = NotFound desc = could not find container \"89e84af07a003d96cbc866678c72d698389a2a203439d0e7c5e5c35be3e29833\": container with ID starting with 89e84af07a003d96cbc866678c72d698389a2a203439d0e7c5e5c35be3e29833 not found: ID does not exist" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.102348 4794 scope.go:117] "RemoveContainer" containerID="ef455fabfdbfeea902086b034c0c5be8b9f499365193ee1ac962d1962cc87e5a" Feb 16 17:23:36 crc kubenswrapper[4794]: E0216 17:23:36.102589 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef455fabfdbfeea902086b034c0c5be8b9f499365193ee1ac962d1962cc87e5a\": container with ID starting with ef455fabfdbfeea902086b034c0c5be8b9f499365193ee1ac962d1962cc87e5a not found: ID does not exist" containerID="ef455fabfdbfeea902086b034c0c5be8b9f499365193ee1ac962d1962cc87e5a" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.102609 4794 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef455fabfdbfeea902086b034c0c5be8b9f499365193ee1ac962d1962cc87e5a"} err="failed to get container status \"ef455fabfdbfeea902086b034c0c5be8b9f499365193ee1ac962d1962cc87e5a\": rpc error: code = NotFound desc = could not find container \"ef455fabfdbfeea902086b034c0c5be8b9f499365193ee1ac962d1962cc87e5a\": container with ID starting with ef455fabfdbfeea902086b034c0c5be8b9f499365193ee1ac962d1962cc87e5a not found: ID does not exist" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.107740 4794 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\") on node \"crc\" " Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.107784 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07460b16-5cea-4a16-8389-dc1d3e7c3ee8-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.160239 4794 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.160658 4794 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b") on node "crc" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.233104 4794 reconciler_common.go:293] "Volume detached for volume \"pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.266533 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.284814 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.299278 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:23:36 crc kubenswrapper[4794]: E0216 17:23:36.299749 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07460b16-5cea-4a16-8389-dc1d3e7c3ee8" containerName="heat-cfnapi" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.299766 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="07460b16-5cea-4a16-8389-dc1d3e7c3ee8" containerName="heat-cfnapi" Feb 16 17:23:36 crc kubenswrapper[4794]: E0216 17:23:36.299778 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf7b50d-6ee8-41b2-b69f-123961055859" containerName="glance-httpd" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.299786 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf7b50d-6ee8-41b2-b69f-123961055859" containerName="glance-httpd" Feb 16 17:23:36 crc kubenswrapper[4794]: E0216 17:23:36.299809 4794 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="d1755f51-5efd-43e0-902e-c2c1b6760350" containerName="heat-api" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.299817 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1755f51-5efd-43e0-902e-c2c1b6760350" containerName="heat-api" Feb 16 17:23:36 crc kubenswrapper[4794]: E0216 17:23:36.299826 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf7b50d-6ee8-41b2-b69f-123961055859" containerName="glance-log" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.299831 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf7b50d-6ee8-41b2-b69f-123961055859" containerName="glance-log" Feb 16 17:23:36 crc kubenswrapper[4794]: E0216 17:23:36.299846 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92c8cdf-5125-46d9-89c1-8549a2dc1b74" containerName="placement-log" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.299853 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92c8cdf-5125-46d9-89c1-8549a2dc1b74" containerName="placement-log" Feb 16 17:23:36 crc kubenswrapper[4794]: E0216 17:23:36.299883 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92c8cdf-5125-46d9-89c1-8549a2dc1b74" containerName="placement-api" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.299891 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92c8cdf-5125-46d9-89c1-8549a2dc1b74" containerName="placement-api" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.300116 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf7b50d-6ee8-41b2-b69f-123961055859" containerName="glance-log" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.300131 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf7b50d-6ee8-41b2-b69f-123961055859" containerName="glance-httpd" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.300145 4794 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b92c8cdf-5125-46d9-89c1-8549a2dc1b74" containerName="placement-api" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.300155 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1755f51-5efd-43e0-902e-c2c1b6760350" containerName="heat-api" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.300165 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1755f51-5efd-43e0-902e-c2c1b6760350" containerName="heat-api" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.300180 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="07460b16-5cea-4a16-8389-dc1d3e7c3ee8" containerName="heat-cfnapi" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.300196 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="b92c8cdf-5125-46d9-89c1-8549a2dc1b74" containerName="placement-log" Feb 16 17:23:36 crc kubenswrapper[4794]: E0216 17:23:36.300402 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07460b16-5cea-4a16-8389-dc1d3e7c3ee8" containerName="heat-cfnapi" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.300409 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="07460b16-5cea-4a16-8389-dc1d3e7c3ee8" containerName="heat-cfnapi" Feb 16 17:23:36 crc kubenswrapper[4794]: E0216 17:23:36.300424 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1755f51-5efd-43e0-902e-c2c1b6760350" containerName="heat-api" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.300430 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1755f51-5efd-43e0-902e-c2c1b6760350" containerName="heat-api" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.300657 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="07460b16-5cea-4a16-8389-dc1d3e7c3ee8" containerName="heat-cfnapi" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.306583 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.308917 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.314849 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.315053 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.334922 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzns2\" (UniqueName: \"kubernetes.io/projected/4db1f19d-64b2-439f-a763-ab694b3e2953-kube-api-access-fzns2\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.335282 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4db1f19d-64b2-439f-a763-ab694b3e2953-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.335481 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4db1f19d-64b2-439f-a763-ab694b3e2953-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.335600 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4db1f19d-64b2-439f-a763-ab694b3e2953-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.335867 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.335975 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4db1f19d-64b2-439f-a763-ab694b3e2953-config-data\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.336064 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4db1f19d-64b2-439f-a763-ab694b3e2953-logs\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.336202 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4db1f19d-64b2-439f-a763-ab694b3e2953-scripts\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.438484 4794 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fzns2\" (UniqueName: \"kubernetes.io/projected/4db1f19d-64b2-439f-a763-ab694b3e2953-kube-api-access-fzns2\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.438555 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4db1f19d-64b2-439f-a763-ab694b3e2953-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.438589 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4db1f19d-64b2-439f-a763-ab694b3e2953-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.438631 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4db1f19d-64b2-439f-a763-ab694b3e2953-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.438701 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.438743 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4db1f19d-64b2-439f-a763-ab694b3e2953-config-data\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.438767 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4db1f19d-64b2-439f-a763-ab694b3e2953-logs\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.438819 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4db1f19d-64b2-439f-a763-ab694b3e2953-scripts\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.439854 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4db1f19d-64b2-439f-a763-ab694b3e2953-logs\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.440019 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4db1f19d-64b2-439f-a763-ab694b3e2953-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.443637 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4db1f19d-64b2-439f-a763-ab694b3e2953-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.446185 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4db1f19d-64b2-439f-a763-ab694b3e2953-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.447218 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4db1f19d-64b2-439f-a763-ab694b3e2953-config-data\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.447596 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.447623 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/02eb96684cb2daee1e7757d905c4024416c5994d26b1f18fcded63c6a3978ca1/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.454938 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4db1f19d-64b2-439f-a763-ab694b3e2953-scripts\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.466994 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzns2\" (UniqueName: \"kubernetes.io/projected/4db1f19d-64b2-439f-a763-ab694b3e2953-kube-api-access-fzns2\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.529249 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90dec0f6-7306-4e92-99b8-2e5577babb4b\") pod \"glance-default-external-api-0\" (UID: \"4db1f19d-64b2-439f-a763-ab694b3e2953\") " pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.592053 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-g8d8v" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.645033 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.645246 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgmgf\" (UniqueName: \"kubernetes.io/projected/e0770add-b35a-4790-b877-78e7a2661b48-kube-api-access-dgmgf\") pod \"e0770add-b35a-4790-b877-78e7a2661b48\" (UID: \"e0770add-b35a-4790-b877-78e7a2661b48\") " Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.645472 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0770add-b35a-4790-b877-78e7a2661b48-operator-scripts\") pod \"e0770add-b35a-4790-b877-78e7a2661b48\" (UID: \"e0770add-b35a-4790-b877-78e7a2661b48\") " Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.646627 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0770add-b35a-4790-b877-78e7a2661b48-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0770add-b35a-4790-b877-78e7a2661b48" (UID: "e0770add-b35a-4790-b877-78e7a2661b48"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.660592 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0770add-b35a-4790-b877-78e7a2661b48-kube-api-access-dgmgf" (OuterVolumeSpecName: "kube-api-access-dgmgf") pod "e0770add-b35a-4790-b877-78e7a2661b48" (UID: "e0770add-b35a-4790-b877-78e7a2661b48"). InnerVolumeSpecName "kube-api-access-dgmgf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.748321 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgmgf\" (UniqueName: \"kubernetes.io/projected/e0770add-b35a-4790-b877-78e7a2661b48-kube-api-access-dgmgf\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.748371 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0770add-b35a-4790-b877-78e7a2661b48-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.780649 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g8d8v" event={"ID":"e0770add-b35a-4790-b877-78e7a2661b48","Type":"ContainerDied","Data":"b742909a2cb2270531d1aac2b672dbe56ce28a9869ad2b6b27f42d313793f196"} Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.780701 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b742909a2cb2270531d1aac2b672dbe56ce28a9869ad2b6b27f42d313793f196" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.780776 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-g8d8v" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.824451 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cf7b50d-6ee8-41b2-b69f-123961055859" path="/var/lib/kubelet/pods/4cf7b50d-6ee8-41b2-b69f-123961055859/volumes" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.825690 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1755f51-5efd-43e0-902e-c2c1b6760350" path="/var/lib/kubelet/pods/d1755f51-5efd-43e0-902e-c2c1b6760350/volumes" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.826134 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-54fbcd866f-568dg" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.830109 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7154-account-create-update-wzjn5" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.834032 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54fbcd866f-568dg" event={"ID":"07460b16-5cea-4a16-8389-dc1d3e7c3ee8","Type":"ContainerDied","Data":"07a7703ae85b026816df39c57725ec2d31417af521dfbfa360197ae38ee4f374"} Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.836712 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8970-account-create-update-bjpr9" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.838151 4794 scope.go:117] "RemoveContainer" containerID="5d689b127434cc55d2784c9b4928add67726ffd7eb7a8ec5585038d18384fc8d" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.864915 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vsrl6" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.929172 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9ed5-account-create-update-4c5kr" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.942033 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-sn2z4" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.962335 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-54fbcd866f-568dg"] Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.975101 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baa5596a-0a62-46eb-9652-e6fd66238582-operator-scripts\") pod \"baa5596a-0a62-46eb-9652-e6fd66238582\" (UID: \"baa5596a-0a62-46eb-9652-e6fd66238582\") " Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.975220 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99jcb\" (UniqueName: \"kubernetes.io/projected/589ff3be-38fe-4b42-9465-749794f9d7ac-kube-api-access-99jcb\") pod \"589ff3be-38fe-4b42-9465-749794f9d7ac\" (UID: \"589ff3be-38fe-4b42-9465-749794f9d7ac\") " Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.975493 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/589ff3be-38fe-4b42-9465-749794f9d7ac-operator-scripts\") pod \"589ff3be-38fe-4b42-9465-749794f9d7ac\" (UID: \"589ff3be-38fe-4b42-9465-749794f9d7ac\") " Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.975527 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22fb06b0-be61-4104-bd06-e83653551448-operator-scripts\") pod \"22fb06b0-be61-4104-bd06-e83653551448\" (UID: \"22fb06b0-be61-4104-bd06-e83653551448\") " Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.975606 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ngzr\" (UniqueName: \"kubernetes.io/projected/baa5596a-0a62-46eb-9652-e6fd66238582-kube-api-access-4ngzr\") pod \"baa5596a-0a62-46eb-9652-e6fd66238582\" (UID: 
\"baa5596a-0a62-46eb-9652-e6fd66238582\") " Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.975743 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v65hq\" (UniqueName: \"kubernetes.io/projected/22fb06b0-be61-4104-bd06-e83653551448-kube-api-access-v65hq\") pod \"22fb06b0-be61-4104-bd06-e83653551448\" (UID: \"22fb06b0-be61-4104-bd06-e83653551448\") " Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.977510 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-54fbcd866f-568dg"] Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.978989 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22fb06b0-be61-4104-bd06-e83653551448-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22fb06b0-be61-4104-bd06-e83653551448" (UID: "22fb06b0-be61-4104-bd06-e83653551448"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.979566 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/589ff3be-38fe-4b42-9465-749794f9d7ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "589ff3be-38fe-4b42-9465-749794f9d7ac" (UID: "589ff3be-38fe-4b42-9465-749794f9d7ac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.980577 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baa5596a-0a62-46eb-9652-e6fd66238582-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "baa5596a-0a62-46eb-9652-e6fd66238582" (UID: "baa5596a-0a62-46eb-9652-e6fd66238582"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.987664 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa5596a-0a62-46eb-9652-e6fd66238582-kube-api-access-4ngzr" (OuterVolumeSpecName: "kube-api-access-4ngzr") pod "baa5596a-0a62-46eb-9652-e6fd66238582" (UID: "baa5596a-0a62-46eb-9652-e6fd66238582"). InnerVolumeSpecName "kube-api-access-4ngzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:36 crc kubenswrapper[4794]: I0216 17:23:36.998786 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/589ff3be-38fe-4b42-9465-749794f9d7ac-kube-api-access-99jcb" (OuterVolumeSpecName: "kube-api-access-99jcb") pod "589ff3be-38fe-4b42-9465-749794f9d7ac" (UID: "589ff3be-38fe-4b42-9465-749794f9d7ac"). InnerVolumeSpecName "kube-api-access-99jcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.006687 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22fb06b0-be61-4104-bd06-e83653551448-kube-api-access-v65hq" (OuterVolumeSpecName: "kube-api-access-v65hq") pod "22fb06b0-be61-4104-bd06-e83653551448" (UID: "22fb06b0-be61-4104-bd06-e83653551448"). InnerVolumeSpecName "kube-api-access-v65hq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.077542 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrkp5\" (UniqueName: \"kubernetes.io/projected/ab6d5750-0a23-4cce-8557-b0b1d867f91b-kube-api-access-vrkp5\") pod \"ab6d5750-0a23-4cce-8557-b0b1d867f91b\" (UID: \"ab6d5750-0a23-4cce-8557-b0b1d867f91b\") " Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.077649 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tllbz\" (UniqueName: \"kubernetes.io/projected/1469ebf3-80e2-45db-bb76-9c0d75fa6ba0-kube-api-access-tllbz\") pod \"1469ebf3-80e2-45db-bb76-9c0d75fa6ba0\" (UID: \"1469ebf3-80e2-45db-bb76-9c0d75fa6ba0\") " Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.077776 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab6d5750-0a23-4cce-8557-b0b1d867f91b-operator-scripts\") pod \"ab6d5750-0a23-4cce-8557-b0b1d867f91b\" (UID: \"ab6d5750-0a23-4cce-8557-b0b1d867f91b\") " Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.077809 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1469ebf3-80e2-45db-bb76-9c0d75fa6ba0-operator-scripts\") pod \"1469ebf3-80e2-45db-bb76-9c0d75fa6ba0\" (UID: \"1469ebf3-80e2-45db-bb76-9c0d75fa6ba0\") " Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.078414 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ngzr\" (UniqueName: \"kubernetes.io/projected/baa5596a-0a62-46eb-9652-e6fd66238582-kube-api-access-4ngzr\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.078431 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v65hq\" (UniqueName: 
\"kubernetes.io/projected/22fb06b0-be61-4104-bd06-e83653551448-kube-api-access-v65hq\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.078440 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baa5596a-0a62-46eb-9652-e6fd66238582-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.078450 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99jcb\" (UniqueName: \"kubernetes.io/projected/589ff3be-38fe-4b42-9465-749794f9d7ac-kube-api-access-99jcb\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.078461 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/589ff3be-38fe-4b42-9465-749794f9d7ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.078472 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22fb06b0-be61-4104-bd06-e83653551448-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.079192 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1469ebf3-80e2-45db-bb76-9c0d75fa6ba0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1469ebf3-80e2-45db-bb76-9c0d75fa6ba0" (UID: "1469ebf3-80e2-45db-bb76-9c0d75fa6ba0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.079210 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab6d5750-0a23-4cce-8557-b0b1d867f91b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab6d5750-0a23-4cce-8557-b0b1d867f91b" (UID: "ab6d5750-0a23-4cce-8557-b0b1d867f91b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.083518 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab6d5750-0a23-4cce-8557-b0b1d867f91b-kube-api-access-vrkp5" (OuterVolumeSpecName: "kube-api-access-vrkp5") pod "ab6d5750-0a23-4cce-8557-b0b1d867f91b" (UID: "ab6d5750-0a23-4cce-8557-b0b1d867f91b"). InnerVolumeSpecName "kube-api-access-vrkp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.091584 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1469ebf3-80e2-45db-bb76-9c0d75fa6ba0-kube-api-access-tllbz" (OuterVolumeSpecName: "kube-api-access-tllbz") pod "1469ebf3-80e2-45db-bb76-9c0d75fa6ba0" (UID: "1469ebf3-80e2-45db-bb76-9c0d75fa6ba0"). InnerVolumeSpecName "kube-api-access-tllbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.111971 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.112245 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d19b9ba9-ea39-41de-a397-3c5e844f24d8" containerName="glance-log" containerID="cri-o://f7fcc9d2e2a3cf045de6933c6f3157fa88ef76b4c31433b15d47742f69c704c4" gracePeriod=30 Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.112972 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d19b9ba9-ea39-41de-a397-3c5e844f24d8" containerName="glance-httpd" containerID="cri-o://32f52e093ea4ea104e1eb0b7e91432ca9fd23e6ef1f6124fdb162c3d7dca3c56" gracePeriod=30 Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.180758 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrkp5\" (UniqueName: \"kubernetes.io/projected/ab6d5750-0a23-4cce-8557-b0b1d867f91b-kube-api-access-vrkp5\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.181018 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tllbz\" (UniqueName: \"kubernetes.io/projected/1469ebf3-80e2-45db-bb76-9c0d75fa6ba0-kube-api-access-tllbz\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.181029 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab6d5750-0a23-4cce-8557-b0b1d867f91b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.181038 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1469ebf3-80e2-45db-bb76-9c0d75fa6ba0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.401969 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:23:37 crc kubenswrapper[4794]: W0216 17:23:37.412606 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4db1f19d_64b2_439f_a763_ab694b3e2953.slice/crio-b2c005d6f7b347b7e0ca24fc03896c234b393adfd467b4b57a56a1111beeafe8 WatchSource:0}: Error finding container b2c005d6f7b347b7e0ca24fc03896c234b393adfd467b4b57a56a1111beeafe8: Status 404 returned error can't find the container with id b2c005d6f7b347b7e0ca24fc03896c234b393adfd467b4b57a56a1111beeafe8 Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.843139 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4db1f19d-64b2-439f-a763-ab694b3e2953","Type":"ContainerStarted","Data":"b2c005d6f7b347b7e0ca24fc03896c234b393adfd467b4b57a56a1111beeafe8"} Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.845346 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7154-account-create-update-wzjn5" event={"ID":"baa5596a-0a62-46eb-9652-e6fd66238582","Type":"ContainerDied","Data":"b7a35f3b78548f1be1b9d9518d0143e50d94669d1f71d3e011ef618f04698e89"} Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.845372 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7a35f3b78548f1be1b9d9518d0143e50d94669d1f71d3e011ef618f04698e89" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.845425 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7154-account-create-update-wzjn5" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.858363 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9ed5-account-create-update-4c5kr" event={"ID":"ab6d5750-0a23-4cce-8557-b0b1d867f91b","Type":"ContainerDied","Data":"faeb80c1f24d9bad6139eb4321bd44f16d037ff3456ecbc47711f4cd2dd9ec07"} Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.858402 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="faeb80c1f24d9bad6139eb4321bd44f16d037ff3456ecbc47711f4cd2dd9ec07" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.858471 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9ed5-account-create-update-4c5kr" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.869911 4794 generic.go:334] "Generic (PLEG): container finished" podID="d19b9ba9-ea39-41de-a397-3c5e844f24d8" containerID="f7fcc9d2e2a3cf045de6933c6f3157fa88ef76b4c31433b15d47742f69c704c4" exitCode=143 Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.869994 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d19b9ba9-ea39-41de-a397-3c5e844f24d8","Type":"ContainerDied","Data":"f7fcc9d2e2a3cf045de6933c6f3157fa88ef76b4c31433b15d47742f69c704c4"} Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.874713 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-sn2z4" event={"ID":"1469ebf3-80e2-45db-bb76-9c0d75fa6ba0","Type":"ContainerDied","Data":"7361e542dec1558adeb043f90a8655b669f55bad8abdafb081dc30c0b058c4ce"} Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.874741 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7361e542dec1558adeb043f90a8655b669f55bad8abdafb081dc30c0b058c4ce" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.874794 4794 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-sn2z4" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.892914 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8970-account-create-update-bjpr9" event={"ID":"22fb06b0-be61-4104-bd06-e83653551448","Type":"ContainerDied","Data":"7619b2bf8fc1200dd608f90ec827ebbcac8b8eb879aaf0799c12f9c9155cf506"} Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.892974 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7619b2bf8fc1200dd608f90ec827ebbcac8b8eb879aaf0799c12f9c9155cf506" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.893251 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8970-account-create-update-bjpr9" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.902978 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vsrl6" event={"ID":"589ff3be-38fe-4b42-9465-749794f9d7ac","Type":"ContainerDied","Data":"a4b03611d5e19d2f8bd4e57a11527213d12065462261adf8808f888b0be5ad80"} Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.903020 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4b03611d5e19d2f8bd4e57a11527213d12065462261adf8808f888b0be5ad80" Feb 16 17:23:37 crc kubenswrapper[4794]: I0216 17:23:37.903113 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vsrl6" Feb 16 17:23:38 crc kubenswrapper[4794]: I0216 17:23:38.814068 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07460b16-5cea-4a16-8389-dc1d3e7c3ee8" path="/var/lib/kubelet/pods/07460b16-5cea-4a16-8389-dc1d3e7c3ee8/volumes" Feb 16 17:23:38 crc kubenswrapper[4794]: I0216 17:23:38.923185 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4db1f19d-64b2-439f-a763-ab694b3e2953","Type":"ContainerStarted","Data":"c939dd0cc425827cd9db757ac849abfb9f6ccdeb55ae4a3df2d1baae4aa0d403"} Feb 16 17:23:38 crc kubenswrapper[4794]: I0216 17:23:38.923237 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4db1f19d-64b2-439f-a763-ab694b3e2953","Type":"ContainerStarted","Data":"e75b25ee87c95c67a916ea1a6059e041d4f8c26e719d8c4bc0cb858c7c939d4e"} Feb 16 17:23:38 crc kubenswrapper[4794]: I0216 17:23:38.956809 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.956788212 podStartE2EDuration="2.956788212s" podCreationTimestamp="2026-02-16 17:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:38.946361668 +0000 UTC m=+1444.894456325" watchObservedRunningTime="2026-02-16 17:23:38.956788212 +0000 UTC m=+1444.904882859" Feb 16 17:23:39 crc kubenswrapper[4794]: I0216 17:23:39.907085 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 17:23:40 crc kubenswrapper[4794]: I0216 17:23:40.897027 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.013028 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d19b9ba9-ea39-41de-a397-3c5e844f24d8-logs\") pod \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.013114 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98qzb\" (UniqueName: \"kubernetes.io/projected/d19b9ba9-ea39-41de-a397-3c5e844f24d8-kube-api-access-98qzb\") pod \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.013267 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d19b9ba9-ea39-41de-a397-3c5e844f24d8-httpd-run\") pod \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.013352 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-internal-tls-certs\") pod \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.013410 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-scripts\") pod \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.013439 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-config-data\") pod \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.013471 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-combined-ca-bundle\") pod \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.014208 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d19b9ba9-ea39-41de-a397-3c5e844f24d8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d19b9ba9-ea39-41de-a397-3c5e844f24d8" (UID: "d19b9ba9-ea39-41de-a397-3c5e844f24d8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.014563 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d19b9ba9-ea39-41de-a397-3c5e844f24d8-logs" (OuterVolumeSpecName: "logs") pod "d19b9ba9-ea39-41de-a397-3c5e844f24d8" (UID: "d19b9ba9-ea39-41de-a397-3c5e844f24d8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.014983 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\") pod \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\" (UID: \"d19b9ba9-ea39-41de-a397-3c5e844f24d8\") " Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.015614 4794 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d19b9ba9-ea39-41de-a397-3c5e844f24d8-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.015626 4794 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d19b9ba9-ea39-41de-a397-3c5e844f24d8-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.034042 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19b9ba9-ea39-41de-a397-3c5e844f24d8-kube-api-access-98qzb" (OuterVolumeSpecName: "kube-api-access-98qzb") pod "d19b9ba9-ea39-41de-a397-3c5e844f24d8" (UID: "d19b9ba9-ea39-41de-a397-3c5e844f24d8"). InnerVolumeSpecName "kube-api-access-98qzb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.040199 4794 generic.go:334] "Generic (PLEG): container finished" podID="d19b9ba9-ea39-41de-a397-3c5e844f24d8" containerID="32f52e093ea4ea104e1eb0b7e91432ca9fd23e6ef1f6124fdb162c3d7dca3c56" exitCode=0 Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.040250 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d19b9ba9-ea39-41de-a397-3c5e844f24d8","Type":"ContainerDied","Data":"32f52e093ea4ea104e1eb0b7e91432ca9fd23e6ef1f6124fdb162c3d7dca3c56"} Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.040285 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d19b9ba9-ea39-41de-a397-3c5e844f24d8","Type":"ContainerDied","Data":"95320eead86b6201e11386ddce890128fb5bfb64949195e914dbc9f3fa6fdfc1"} Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.040412 4794 scope.go:117] "RemoveContainer" containerID="32f52e093ea4ea104e1eb0b7e91432ca9fd23e6ef1f6124fdb162c3d7dca3c56" Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.040664 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.045775 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-scripts" (OuterVolumeSpecName: "scripts") pod "d19b9ba9-ea39-41de-a397-3c5e844f24d8" (UID: "d19b9ba9-ea39-41de-a397-3c5e844f24d8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.058670 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd" (OuterVolumeSpecName: "glance") pod "d19b9ba9-ea39-41de-a397-3c5e844f24d8" (UID: "d19b9ba9-ea39-41de-a397-3c5e844f24d8"). InnerVolumeSpecName "pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.086448 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d19b9ba9-ea39-41de-a397-3c5e844f24d8" (UID: "d19b9ba9-ea39-41de-a397-3c5e844f24d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.131640 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.131676 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.131711 4794 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\") on node \"crc\" " Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.131721 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98qzb\" (UniqueName: 
\"kubernetes.io/projected/d19b9ba9-ea39-41de-a397-3c5e844f24d8-kube-api-access-98qzb\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.153794 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d19b9ba9-ea39-41de-a397-3c5e844f24d8" (UID: "d19b9ba9-ea39-41de-a397-3c5e844f24d8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.165610 4794 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.165818 4794 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd") on node "crc" Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.171955 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-config-data" (OuterVolumeSpecName: "config-data") pod "d19b9ba9-ea39-41de-a397-3c5e844f24d8" (UID: "d19b9ba9-ea39-41de-a397-3c5e844f24d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.190240 4794 scope.go:117] "RemoveContainer" containerID="f7fcc9d2e2a3cf045de6933c6f3157fa88ef76b4c31433b15d47742f69c704c4"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.219807 4794 scope.go:117] "RemoveContainer" containerID="32f52e093ea4ea104e1eb0b7e91432ca9fd23e6ef1f6124fdb162c3d7dca3c56"
Feb 16 17:23:41 crc kubenswrapper[4794]: E0216 17:23:41.220221 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32f52e093ea4ea104e1eb0b7e91432ca9fd23e6ef1f6124fdb162c3d7dca3c56\": container with ID starting with 32f52e093ea4ea104e1eb0b7e91432ca9fd23e6ef1f6124fdb162c3d7dca3c56 not found: ID does not exist" containerID="32f52e093ea4ea104e1eb0b7e91432ca9fd23e6ef1f6124fdb162c3d7dca3c56"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.220253 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32f52e093ea4ea104e1eb0b7e91432ca9fd23e6ef1f6124fdb162c3d7dca3c56"} err="failed to get container status \"32f52e093ea4ea104e1eb0b7e91432ca9fd23e6ef1f6124fdb162c3d7dca3c56\": rpc error: code = NotFound desc = could not find container \"32f52e093ea4ea104e1eb0b7e91432ca9fd23e6ef1f6124fdb162c3d7dca3c56\": container with ID starting with 32f52e093ea4ea104e1eb0b7e91432ca9fd23e6ef1f6124fdb162c3d7dca3c56 not found: ID does not exist"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.220279 4794 scope.go:117] "RemoveContainer" containerID="f7fcc9d2e2a3cf045de6933c6f3157fa88ef76b4c31433b15d47742f69c704c4"
Feb 16 17:23:41 crc kubenswrapper[4794]: E0216 17:23:41.221557 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7fcc9d2e2a3cf045de6933c6f3157fa88ef76b4c31433b15d47742f69c704c4\": container with ID starting with f7fcc9d2e2a3cf045de6933c6f3157fa88ef76b4c31433b15d47742f69c704c4 not found: ID does not exist" containerID="f7fcc9d2e2a3cf045de6933c6f3157fa88ef76b4c31433b15d47742f69c704c4"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.221581 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7fcc9d2e2a3cf045de6933c6f3157fa88ef76b4c31433b15d47742f69c704c4"} err="failed to get container status \"f7fcc9d2e2a3cf045de6933c6f3157fa88ef76b4c31433b15d47742f69c704c4\": rpc error: code = NotFound desc = could not find container \"f7fcc9d2e2a3cf045de6933c6f3157fa88ef76b4c31433b15d47742f69c704c4\": container with ID starting with f7fcc9d2e2a3cf045de6933c6f3157fa88ef76b4c31433b15d47742f69c704c4 not found: ID does not exist"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.233748 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.233784 4794 reconciler_common.go:293] "Volume detached for volume \"pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.233796 4794 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d19b9ba9-ea39-41de-a397-3c5e844f24d8-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.302153 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4gn4j"]
Feb 16 17:23:41 crc kubenswrapper[4794]: E0216 17:23:41.302638 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0770add-b35a-4790-b877-78e7a2661b48" containerName="mariadb-database-create"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.302655 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0770add-b35a-4790-b877-78e7a2661b48" containerName="mariadb-database-create"
Feb 16 17:23:41 crc kubenswrapper[4794]: E0216 17:23:41.302669 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d19b9ba9-ea39-41de-a397-3c5e844f24d8" containerName="glance-log"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.302677 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d19b9ba9-ea39-41de-a397-3c5e844f24d8" containerName="glance-log"
Feb 16 17:23:41 crc kubenswrapper[4794]: E0216 17:23:41.302689 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22fb06b0-be61-4104-bd06-e83653551448" containerName="mariadb-account-create-update"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.302697 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="22fb06b0-be61-4104-bd06-e83653551448" containerName="mariadb-account-create-update"
Feb 16 17:23:41 crc kubenswrapper[4794]: E0216 17:23:41.302707 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab6d5750-0a23-4cce-8557-b0b1d867f91b" containerName="mariadb-account-create-update"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.302713 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab6d5750-0a23-4cce-8557-b0b1d867f91b" containerName="mariadb-account-create-update"
Feb 16 17:23:41 crc kubenswrapper[4794]: E0216 17:23:41.302728 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1469ebf3-80e2-45db-bb76-9c0d75fa6ba0" containerName="mariadb-database-create"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.302734 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="1469ebf3-80e2-45db-bb76-9c0d75fa6ba0" containerName="mariadb-database-create"
Feb 16 17:23:41 crc kubenswrapper[4794]: E0216 17:23:41.302743 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d19b9ba9-ea39-41de-a397-3c5e844f24d8" containerName="glance-httpd"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.302750 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d19b9ba9-ea39-41de-a397-3c5e844f24d8" containerName="glance-httpd"
Feb 16 17:23:41 crc kubenswrapper[4794]: E0216 17:23:41.302776 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa5596a-0a62-46eb-9652-e6fd66238582" containerName="mariadb-account-create-update"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.302782 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa5596a-0a62-46eb-9652-e6fd66238582" containerName="mariadb-account-create-update"
Feb 16 17:23:41 crc kubenswrapper[4794]: E0216 17:23:41.302805 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="589ff3be-38fe-4b42-9465-749794f9d7ac" containerName="mariadb-database-create"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.302811 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="589ff3be-38fe-4b42-9465-749794f9d7ac" containerName="mariadb-database-create"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.303004 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0770add-b35a-4790-b877-78e7a2661b48" containerName="mariadb-database-create"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.303016 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="baa5596a-0a62-46eb-9652-e6fd66238582" containerName="mariadb-account-create-update"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.303025 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d19b9ba9-ea39-41de-a397-3c5e844f24d8" containerName="glance-httpd"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.303037 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="22fb06b0-be61-4104-bd06-e83653551448" containerName="mariadb-account-create-update"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.303046 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d19b9ba9-ea39-41de-a397-3c5e844f24d8" containerName="glance-log"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.303057 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab6d5750-0a23-4cce-8557-b0b1d867f91b" containerName="mariadb-account-create-update"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.303070 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="589ff3be-38fe-4b42-9465-749794f9d7ac" containerName="mariadb-database-create"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.303090 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="1469ebf3-80e2-45db-bb76-9c0d75fa6ba0" containerName="mariadb-database-create"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.304179 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4gn4j"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.307627 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.309416 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-b4ckg"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.309516 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.319288 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4gn4j"]
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.438275 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/619e01e3-7fcb-4b21-b1df-07ba70374e09-config-data\") pod \"nova-cell0-conductor-db-sync-4gn4j\" (UID: \"619e01e3-7fcb-4b21-b1df-07ba70374e09\") " pod="openstack/nova-cell0-conductor-db-sync-4gn4j"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.438407 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/619e01e3-7fcb-4b21-b1df-07ba70374e09-scripts\") pod \"nova-cell0-conductor-db-sync-4gn4j\" (UID: \"619e01e3-7fcb-4b21-b1df-07ba70374e09\") " pod="openstack/nova-cell0-conductor-db-sync-4gn4j"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.438607 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntvjp\" (UniqueName: \"kubernetes.io/projected/619e01e3-7fcb-4b21-b1df-07ba70374e09-kube-api-access-ntvjp\") pod \"nova-cell0-conductor-db-sync-4gn4j\" (UID: \"619e01e3-7fcb-4b21-b1df-07ba70374e09\") " pod="openstack/nova-cell0-conductor-db-sync-4gn4j"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.438688 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619e01e3-7fcb-4b21-b1df-07ba70374e09-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4gn4j\" (UID: \"619e01e3-7fcb-4b21-b1df-07ba70374e09\") " pod="openstack/nova-cell0-conductor-db-sync-4gn4j"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.469372 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.480698 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.493116 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.495394 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.500826 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.501048 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.509231 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.544316 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntvjp\" (UniqueName: \"kubernetes.io/projected/619e01e3-7fcb-4b21-b1df-07ba70374e09-kube-api-access-ntvjp\") pod \"nova-cell0-conductor-db-sync-4gn4j\" (UID: \"619e01e3-7fcb-4b21-b1df-07ba70374e09\") " pod="openstack/nova-cell0-conductor-db-sync-4gn4j"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.544381 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619e01e3-7fcb-4b21-b1df-07ba70374e09-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4gn4j\" (UID: \"619e01e3-7fcb-4b21-b1df-07ba70374e09\") " pod="openstack/nova-cell0-conductor-db-sync-4gn4j"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.544438 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/619e01e3-7fcb-4b21-b1df-07ba70374e09-config-data\") pod \"nova-cell0-conductor-db-sync-4gn4j\" (UID: \"619e01e3-7fcb-4b21-b1df-07ba70374e09\") " pod="openstack/nova-cell0-conductor-db-sync-4gn4j"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.544490 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/619e01e3-7fcb-4b21-b1df-07ba70374e09-scripts\") pod \"nova-cell0-conductor-db-sync-4gn4j\" (UID: \"619e01e3-7fcb-4b21-b1df-07ba70374e09\") " pod="openstack/nova-cell0-conductor-db-sync-4gn4j"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.557559 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619e01e3-7fcb-4b21-b1df-07ba70374e09-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4gn4j\" (UID: \"619e01e3-7fcb-4b21-b1df-07ba70374e09\") " pod="openstack/nova-cell0-conductor-db-sync-4gn4j"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.558704 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/619e01e3-7fcb-4b21-b1df-07ba70374e09-scripts\") pod \"nova-cell0-conductor-db-sync-4gn4j\" (UID: \"619e01e3-7fcb-4b21-b1df-07ba70374e09\") " pod="openstack/nova-cell0-conductor-db-sync-4gn4j"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.559788 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/619e01e3-7fcb-4b21-b1df-07ba70374e09-config-data\") pod \"nova-cell0-conductor-db-sync-4gn4j\" (UID: \"619e01e3-7fcb-4b21-b1df-07ba70374e09\") " pod="openstack/nova-cell0-conductor-db-sync-4gn4j"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.571421 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntvjp\" (UniqueName: \"kubernetes.io/projected/619e01e3-7fcb-4b21-b1df-07ba70374e09-kube-api-access-ntvjp\") pod \"nova-cell0-conductor-db-sync-4gn4j\" (UID: \"619e01e3-7fcb-4b21-b1df-07ba70374e09\") " pod="openstack/nova-cell0-conductor-db-sync-4gn4j"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.622697 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4gn4j"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.646703 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.646762 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/446c165c-d077-4e2c-a902-ee7d1961edc6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.646811 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s99d9\" (UniqueName: \"kubernetes.io/projected/446c165c-d077-4e2c-a902-ee7d1961edc6-kube-api-access-s99d9\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.646898 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/446c165c-d077-4e2c-a902-ee7d1961edc6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.646974 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/446c165c-d077-4e2c-a902-ee7d1961edc6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.646998 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/446c165c-d077-4e2c-a902-ee7d1961edc6-logs\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.647062 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/446c165c-d077-4e2c-a902-ee7d1961edc6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.647078 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446c165c-d077-4e2c-a902-ee7d1961edc6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.749461 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/446c165c-d077-4e2c-a902-ee7d1961edc6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.749523 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/446c165c-d077-4e2c-a902-ee7d1961edc6-logs\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.749606 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/446c165c-d077-4e2c-a902-ee7d1961edc6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.749622 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446c165c-d077-4e2c-a902-ee7d1961edc6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.749774 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.749806 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/446c165c-d077-4e2c-a902-ee7d1961edc6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.749856 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s99d9\" (UniqueName: \"kubernetes.io/projected/446c165c-d077-4e2c-a902-ee7d1961edc6-kube-api-access-s99d9\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.749889 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/446c165c-d077-4e2c-a902-ee7d1961edc6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.752239 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/446c165c-d077-4e2c-a902-ee7d1961edc6-logs\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.752725 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/446c165c-d077-4e2c-a902-ee7d1961edc6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.757015 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/446c165c-d077-4e2c-a902-ee7d1961edc6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.761527 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/446c165c-d077-4e2c-a902-ee7d1961edc6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.761957 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/446c165c-d077-4e2c-a902-ee7d1961edc6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.773985 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s99d9\" (UniqueName: \"kubernetes.io/projected/446c165c-d077-4e2c-a902-ee7d1961edc6-kube-api-access-s99d9\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.779391 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446c165c-d077-4e2c-a902-ee7d1961edc6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.810551 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 17:23:41 crc kubenswrapper[4794]: I0216 17:23:41.810609 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cb6046d42ed9a0eea3afd967978370f0d4a85f1a0cd82d5e783a4e6c6e087e5f/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:42 crc kubenswrapper[4794]: I0216 17:23:42.148863 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11eabb3a-26f9-4bc3-91e1-9673ce649afd\") pod \"glance-default-internal-api-0\" (UID: \"446c165c-d077-4e2c-a902-ee7d1961edc6\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:42 crc kubenswrapper[4794]: I0216 17:23:42.416747 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 17:23:42 crc kubenswrapper[4794]: I0216 17:23:42.421404 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4gn4j"]
Feb 16 17:23:42 crc kubenswrapper[4794]: I0216 17:23:42.809112 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19b9ba9-ea39-41de-a397-3c5e844f24d8" path="/var/lib/kubelet/pods/d19b9ba9-ea39-41de-a397-3c5e844f24d8/volumes"
Feb 16 17:23:43 crc kubenswrapper[4794]: I0216 17:23:43.004726 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 17:23:43 crc kubenswrapper[4794]: I0216 17:23:43.086432 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4gn4j" event={"ID":"619e01e3-7fcb-4b21-b1df-07ba70374e09","Type":"ContainerStarted","Data":"9aebbce0e022da03763509bc0aa4b1334bc0e436b4857cd508e37f8bb1876b52"}
Feb 16 17:23:43 crc kubenswrapper[4794]: I0216 17:23:43.096561 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"446c165c-d077-4e2c-a902-ee7d1961edc6","Type":"ContainerStarted","Data":"8362a3d61cae74b1874df9c1c4cc3fe04f9577074ec71688ed2339cde7f8b6b6"}
Feb 16 17:23:43 crc kubenswrapper[4794]: I0216 17:23:43.998766 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.126902 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-config-data\") pod \"acdf75eb-c141-4f82-94f9-dd95db013ba7\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") "
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.126942 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-scripts\") pod \"acdf75eb-c141-4f82-94f9-dd95db013ba7\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") "
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.126995 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-combined-ca-bundle\") pod \"acdf75eb-c141-4f82-94f9-dd95db013ba7\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") "
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.127203 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btkhp\" (UniqueName: \"kubernetes.io/projected/acdf75eb-c141-4f82-94f9-dd95db013ba7-kube-api-access-btkhp\") pod \"acdf75eb-c141-4f82-94f9-dd95db013ba7\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") "
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.127219 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-sg-core-conf-yaml\") pod \"acdf75eb-c141-4f82-94f9-dd95db013ba7\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") "
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.127249 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acdf75eb-c141-4f82-94f9-dd95db013ba7-run-httpd\") pod \"acdf75eb-c141-4f82-94f9-dd95db013ba7\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") "
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.127288 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acdf75eb-c141-4f82-94f9-dd95db013ba7-log-httpd\") pod \"acdf75eb-c141-4f82-94f9-dd95db013ba7\" (UID: \"acdf75eb-c141-4f82-94f9-dd95db013ba7\") "
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.132523 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acdf75eb-c141-4f82-94f9-dd95db013ba7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "acdf75eb-c141-4f82-94f9-dd95db013ba7" (UID: "acdf75eb-c141-4f82-94f9-dd95db013ba7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.133757 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acdf75eb-c141-4f82-94f9-dd95db013ba7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "acdf75eb-c141-4f82-94f9-dd95db013ba7" (UID: "acdf75eb-c141-4f82-94f9-dd95db013ba7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.139187 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-scripts" (OuterVolumeSpecName: "scripts") pod "acdf75eb-c141-4f82-94f9-dd95db013ba7" (UID: "acdf75eb-c141-4f82-94f9-dd95db013ba7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.140114 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acdf75eb-c141-4f82-94f9-dd95db013ba7-kube-api-access-btkhp" (OuterVolumeSpecName: "kube-api-access-btkhp") pod "acdf75eb-c141-4f82-94f9-dd95db013ba7" (UID: "acdf75eb-c141-4f82-94f9-dd95db013ba7"). InnerVolumeSpecName "kube-api-access-btkhp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.141246 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"446c165c-d077-4e2c-a902-ee7d1961edc6","Type":"ContainerStarted","Data":"01d9153c4cffb38735e94bb91cfe1088cd0214ed7cb940788b151801da130b33"}
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.174673 4794 generic.go:334] "Generic (PLEG): container finished" podID="acdf75eb-c141-4f82-94f9-dd95db013ba7" containerID="e0b792bccee785b3dadc00ab69e369429e68cd2e968efa5ef07ebe987371d6e2" exitCode=0
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.174718 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acdf75eb-c141-4f82-94f9-dd95db013ba7","Type":"ContainerDied","Data":"e0b792bccee785b3dadc00ab69e369429e68cd2e968efa5ef07ebe987371d6e2"}
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.174747 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acdf75eb-c141-4f82-94f9-dd95db013ba7","Type":"ContainerDied","Data":"155757857d113d89aa8f6e10ba8da8b97f02c0510fe313a869957fc1757aeec3"}
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.174764 4794 scope.go:117] "RemoveContainer" containerID="a4c8de7d82737b60c8a6e1c698fbb67eb43ed3b4451edc75d24a13c8a11af62b"
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.174911 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.213910 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "acdf75eb-c141-4f82-94f9-dd95db013ba7" (UID: "acdf75eb-c141-4f82-94f9-dd95db013ba7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.236027 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btkhp\" (UniqueName: \"kubernetes.io/projected/acdf75eb-c141-4f82-94f9-dd95db013ba7-kube-api-access-btkhp\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.236073 4794 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.236084 4794 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acdf75eb-c141-4f82-94f9-dd95db013ba7-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.236094 4794 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acdf75eb-c141-4f82-94f9-dd95db013ba7-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.236103 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.260520 4794 scope.go:117] "RemoveContainer" containerID="05cd36878ab4c8654b0bdaa4cdb45ca8d8a3ce6c325e55731ca7d3ae1b13f180"
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.261983 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acdf75eb-c141-4f82-94f9-dd95db013ba7" (UID: "acdf75eb-c141-4f82-94f9-dd95db013ba7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.300506 4794 scope.go:117] "RemoveContainer" containerID="cc98ca6dacaf528e6ee39f94aedbceabc700f6277151f33e820bf28a61f6a147"
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.309814 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-547586545-c5624"
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.317364 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-config-data" (OuterVolumeSpecName: "config-data") pod "acdf75eb-c141-4f82-94f9-dd95db013ba7" (UID: "acdf75eb-c141-4f82-94f9-dd95db013ba7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.338555 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.338593 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdf75eb-c141-4f82-94f9-dd95db013ba7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.343649 4794 scope.go:117] "RemoveContainer" containerID="e0b792bccee785b3dadc00ab69e369429e68cd2e968efa5ef07ebe987371d6e2"
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.369962 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-599bd89595-29q2j"]
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.370165 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-599bd89595-29q2j" podUID="f1082952-332b-4c48-b37b-52c919f87f0f" containerName="heat-engine" containerID="cri-o://bfb3f2ff3e912556f3ae81e4ead3c17ca392e240787708cd7bf47fb132d69f7b" gracePeriod=60
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.418639 4794 scope.go:117] "RemoveContainer" containerID="a4c8de7d82737b60c8a6e1c698fbb67eb43ed3b4451edc75d24a13c8a11af62b"
Feb 16 17:23:44 crc kubenswrapper[4794]: E0216 17:23:44.445715 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4c8de7d82737b60c8a6e1c698fbb67eb43ed3b4451edc75d24a13c8a11af62b\": container with ID starting with a4c8de7d82737b60c8a6e1c698fbb67eb43ed3b4451edc75d24a13c8a11af62b not found: ID does not exist" containerID="a4c8de7d82737b60c8a6e1c698fbb67eb43ed3b4451edc75d24a13c8a11af62b"
Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.445770 4794
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4c8de7d82737b60c8a6e1c698fbb67eb43ed3b4451edc75d24a13c8a11af62b"} err="failed to get container status \"a4c8de7d82737b60c8a6e1c698fbb67eb43ed3b4451edc75d24a13c8a11af62b\": rpc error: code = NotFound desc = could not find container \"a4c8de7d82737b60c8a6e1c698fbb67eb43ed3b4451edc75d24a13c8a11af62b\": container with ID starting with a4c8de7d82737b60c8a6e1c698fbb67eb43ed3b4451edc75d24a13c8a11af62b not found: ID does not exist" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.445799 4794 scope.go:117] "RemoveContainer" containerID="05cd36878ab4c8654b0bdaa4cdb45ca8d8a3ce6c325e55731ca7d3ae1b13f180" Feb 16 17:23:44 crc kubenswrapper[4794]: E0216 17:23:44.446192 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05cd36878ab4c8654b0bdaa4cdb45ca8d8a3ce6c325e55731ca7d3ae1b13f180\": container with ID starting with 05cd36878ab4c8654b0bdaa4cdb45ca8d8a3ce6c325e55731ca7d3ae1b13f180 not found: ID does not exist" containerID="05cd36878ab4c8654b0bdaa4cdb45ca8d8a3ce6c325e55731ca7d3ae1b13f180" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.446215 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05cd36878ab4c8654b0bdaa4cdb45ca8d8a3ce6c325e55731ca7d3ae1b13f180"} err="failed to get container status \"05cd36878ab4c8654b0bdaa4cdb45ca8d8a3ce6c325e55731ca7d3ae1b13f180\": rpc error: code = NotFound desc = could not find container \"05cd36878ab4c8654b0bdaa4cdb45ca8d8a3ce6c325e55731ca7d3ae1b13f180\": container with ID starting with 05cd36878ab4c8654b0bdaa4cdb45ca8d8a3ce6c325e55731ca7d3ae1b13f180 not found: ID does not exist" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.446230 4794 scope.go:117] "RemoveContainer" containerID="cc98ca6dacaf528e6ee39f94aedbceabc700f6277151f33e820bf28a61f6a147" Feb 16 17:23:44 crc kubenswrapper[4794]: E0216 
17:23:44.446580 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc98ca6dacaf528e6ee39f94aedbceabc700f6277151f33e820bf28a61f6a147\": container with ID starting with cc98ca6dacaf528e6ee39f94aedbceabc700f6277151f33e820bf28a61f6a147 not found: ID does not exist" containerID="cc98ca6dacaf528e6ee39f94aedbceabc700f6277151f33e820bf28a61f6a147" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.446649 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc98ca6dacaf528e6ee39f94aedbceabc700f6277151f33e820bf28a61f6a147"} err="failed to get container status \"cc98ca6dacaf528e6ee39f94aedbceabc700f6277151f33e820bf28a61f6a147\": rpc error: code = NotFound desc = could not find container \"cc98ca6dacaf528e6ee39f94aedbceabc700f6277151f33e820bf28a61f6a147\": container with ID starting with cc98ca6dacaf528e6ee39f94aedbceabc700f6277151f33e820bf28a61f6a147 not found: ID does not exist" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.446690 4794 scope.go:117] "RemoveContainer" containerID="e0b792bccee785b3dadc00ab69e369429e68cd2e968efa5ef07ebe987371d6e2" Feb 16 17:23:44 crc kubenswrapper[4794]: E0216 17:23:44.447195 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0b792bccee785b3dadc00ab69e369429e68cd2e968efa5ef07ebe987371d6e2\": container with ID starting with e0b792bccee785b3dadc00ab69e369429e68cd2e968efa5ef07ebe987371d6e2 not found: ID does not exist" containerID="e0b792bccee785b3dadc00ab69e369429e68cd2e968efa5ef07ebe987371d6e2" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.447225 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0b792bccee785b3dadc00ab69e369429e68cd2e968efa5ef07ebe987371d6e2"} err="failed to get container status \"e0b792bccee785b3dadc00ab69e369429e68cd2e968efa5ef07ebe987371d6e2\": rpc 
error: code = NotFound desc = could not find container \"e0b792bccee785b3dadc00ab69e369429e68cd2e968efa5ef07ebe987371d6e2\": container with ID starting with e0b792bccee785b3dadc00ab69e369429e68cd2e968efa5ef07ebe987371d6e2 not found: ID does not exist" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.624345 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.639486 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.655386 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:23:44 crc kubenswrapper[4794]: E0216 17:23:44.655978 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acdf75eb-c141-4f82-94f9-dd95db013ba7" containerName="sg-core" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.656013 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="acdf75eb-c141-4f82-94f9-dd95db013ba7" containerName="sg-core" Feb 16 17:23:44 crc kubenswrapper[4794]: E0216 17:23:44.656037 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acdf75eb-c141-4f82-94f9-dd95db013ba7" containerName="proxy-httpd" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.656047 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="acdf75eb-c141-4f82-94f9-dd95db013ba7" containerName="proxy-httpd" Feb 16 17:23:44 crc kubenswrapper[4794]: E0216 17:23:44.656060 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acdf75eb-c141-4f82-94f9-dd95db013ba7" containerName="ceilometer-central-agent" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.656067 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="acdf75eb-c141-4f82-94f9-dd95db013ba7" containerName="ceilometer-central-agent" Feb 16 17:23:44 crc kubenswrapper[4794]: E0216 17:23:44.656114 4794 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="acdf75eb-c141-4f82-94f9-dd95db013ba7" containerName="ceilometer-notification-agent" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.656124 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="acdf75eb-c141-4f82-94f9-dd95db013ba7" containerName="ceilometer-notification-agent" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.656432 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="acdf75eb-c141-4f82-94f9-dd95db013ba7" containerName="ceilometer-central-agent" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.656461 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="acdf75eb-c141-4f82-94f9-dd95db013ba7" containerName="sg-core" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.656473 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="acdf75eb-c141-4f82-94f9-dd95db013ba7" containerName="proxy-httpd" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.656497 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="acdf75eb-c141-4f82-94f9-dd95db013ba7" containerName="ceilometer-notification-agent" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.659125 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.662922 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.684831 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.689315 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.750165 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.750479 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-scripts\") pod \"ceilometer-0\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.750557 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-config-data\") pod \"ceilometer-0\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.750726 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " 
pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.750917 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzrtx\" (UniqueName: \"kubernetes.io/projected/f2cc8fc0-5d5c-402c-9623-d6bd09039197-kube-api-access-dzrtx\") pod \"ceilometer-0\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.750979 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2cc8fc0-5d5c-402c-9623-d6bd09039197-run-httpd\") pod \"ceilometer-0\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.751136 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2cc8fc0-5d5c-402c-9623-d6bd09039197-log-httpd\") pod \"ceilometer-0\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.810023 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acdf75eb-c141-4f82-94f9-dd95db013ba7" path="/var/lib/kubelet/pods/acdf75eb-c141-4f82-94f9-dd95db013ba7/volumes" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.852922 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.853108 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-scripts\") pod \"ceilometer-0\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.853149 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-config-data\") pod \"ceilometer-0\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.853211 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.853330 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzrtx\" (UniqueName: \"kubernetes.io/projected/f2cc8fc0-5d5c-402c-9623-d6bd09039197-kube-api-access-dzrtx\") pod \"ceilometer-0\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.853360 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2cc8fc0-5d5c-402c-9623-d6bd09039197-run-httpd\") pod \"ceilometer-0\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.853455 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2cc8fc0-5d5c-402c-9623-d6bd09039197-log-httpd\") pod \"ceilometer-0\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 
17:23:44.853951 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2cc8fc0-5d5c-402c-9623-d6bd09039197-log-httpd\") pod \"ceilometer-0\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.857132 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2cc8fc0-5d5c-402c-9623-d6bd09039197-run-httpd\") pod \"ceilometer-0\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.858244 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4794]: E0216 17:23:44.858644 4794 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bfb3f2ff3e912556f3ae81e4ead3c17ca392e240787708cd7bf47fb132d69f7b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 17:23:44 crc kubenswrapper[4794]: E0216 17:23:44.860346 4794 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bfb3f2ff3e912556f3ae81e4ead3c17ca392e240787708cd7bf47fb132d69f7b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.862079 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.862438 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-config-data\") pod \"ceilometer-0\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4794]: E0216 17:23:44.862681 4794 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bfb3f2ff3e912556f3ae81e4ead3c17ca392e240787708cd7bf47fb132d69f7b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 17:23:44 crc kubenswrapper[4794]: E0216 17:23:44.862733 4794 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-599bd89595-29q2j" podUID="f1082952-332b-4c48-b37b-52c919f87f0f" containerName="heat-engine" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.862780 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-scripts\") pod \"ceilometer-0\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.900952 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzrtx\" (UniqueName: \"kubernetes.io/projected/f2cc8fc0-5d5c-402c-9623-d6bd09039197-kube-api-access-dzrtx\") pod \"ceilometer-0\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " pod="openstack/ceilometer-0" Feb 16 
17:23:44 crc kubenswrapper[4794]: I0216 17:23:44.979851 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:23:45 crc kubenswrapper[4794]: I0216 17:23:45.221606 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"446c165c-d077-4e2c-a902-ee7d1961edc6","Type":"ContainerStarted","Data":"2e36680eedf93a53be801ef43c901e1ff5a88f32181dc31bf7711667ca21f86f"} Feb 16 17:23:45 crc kubenswrapper[4794]: I0216 17:23:45.296028 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.296006602 podStartE2EDuration="4.296006602s" podCreationTimestamp="2026-02-16 17:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:45.252593857 +0000 UTC m=+1451.200688494" watchObservedRunningTime="2026-02-16 17:23:45.296006602 +0000 UTC m=+1451.244101249" Feb 16 17:23:45 crc kubenswrapper[4794]: I0216 17:23:45.517740 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:23:45 crc kubenswrapper[4794]: I0216 17:23:45.581189 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fx9wp"] Feb 16 17:23:45 crc kubenswrapper[4794]: I0216 17:23:45.584362 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fx9wp" Feb 16 17:23:45 crc kubenswrapper[4794]: I0216 17:23:45.604364 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fx9wp"] Feb 16 17:23:45 crc kubenswrapper[4794]: I0216 17:23:45.682702 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4b4b30a-27b0-4f11-a044-2a646790bc51-utilities\") pod \"redhat-marketplace-fx9wp\" (UID: \"e4b4b30a-27b0-4f11-a044-2a646790bc51\") " pod="openshift-marketplace/redhat-marketplace-fx9wp" Feb 16 17:23:45 crc kubenswrapper[4794]: I0216 17:23:45.682825 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4b4b30a-27b0-4f11-a044-2a646790bc51-catalog-content\") pod \"redhat-marketplace-fx9wp\" (UID: \"e4b4b30a-27b0-4f11-a044-2a646790bc51\") " pod="openshift-marketplace/redhat-marketplace-fx9wp" Feb 16 17:23:45 crc kubenswrapper[4794]: I0216 17:23:45.683105 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dks4\" (UniqueName: \"kubernetes.io/projected/e4b4b30a-27b0-4f11-a044-2a646790bc51-kube-api-access-4dks4\") pod \"redhat-marketplace-fx9wp\" (UID: \"e4b4b30a-27b0-4f11-a044-2a646790bc51\") " pod="openshift-marketplace/redhat-marketplace-fx9wp" Feb 16 17:23:45 crc kubenswrapper[4794]: I0216 17:23:45.791941 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dks4\" (UniqueName: \"kubernetes.io/projected/e4b4b30a-27b0-4f11-a044-2a646790bc51-kube-api-access-4dks4\") pod \"redhat-marketplace-fx9wp\" (UID: \"e4b4b30a-27b0-4f11-a044-2a646790bc51\") " pod="openshift-marketplace/redhat-marketplace-fx9wp" Feb 16 17:23:45 crc kubenswrapper[4794]: I0216 17:23:45.792152 4794 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4b4b30a-27b0-4f11-a044-2a646790bc51-utilities\") pod \"redhat-marketplace-fx9wp\" (UID: \"e4b4b30a-27b0-4f11-a044-2a646790bc51\") " pod="openshift-marketplace/redhat-marketplace-fx9wp" Feb 16 17:23:45 crc kubenswrapper[4794]: I0216 17:23:45.792243 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4b4b30a-27b0-4f11-a044-2a646790bc51-catalog-content\") pod \"redhat-marketplace-fx9wp\" (UID: \"e4b4b30a-27b0-4f11-a044-2a646790bc51\") " pod="openshift-marketplace/redhat-marketplace-fx9wp" Feb 16 17:23:45 crc kubenswrapper[4794]: I0216 17:23:45.792763 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4b4b30a-27b0-4f11-a044-2a646790bc51-utilities\") pod \"redhat-marketplace-fx9wp\" (UID: \"e4b4b30a-27b0-4f11-a044-2a646790bc51\") " pod="openshift-marketplace/redhat-marketplace-fx9wp" Feb 16 17:23:45 crc kubenswrapper[4794]: I0216 17:23:45.792829 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4b4b30a-27b0-4f11-a044-2a646790bc51-catalog-content\") pod \"redhat-marketplace-fx9wp\" (UID: \"e4b4b30a-27b0-4f11-a044-2a646790bc51\") " pod="openshift-marketplace/redhat-marketplace-fx9wp" Feb 16 17:23:45 crc kubenswrapper[4794]: I0216 17:23:45.817395 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dks4\" (UniqueName: \"kubernetes.io/projected/e4b4b30a-27b0-4f11-a044-2a646790bc51-kube-api-access-4dks4\") pod \"redhat-marketplace-fx9wp\" (UID: \"e4b4b30a-27b0-4f11-a044-2a646790bc51\") " pod="openshift-marketplace/redhat-marketplace-fx9wp" Feb 16 17:23:45 crc kubenswrapper[4794]: I0216 17:23:45.922516 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fx9wp" Feb 16 17:23:46 crc kubenswrapper[4794]: I0216 17:23:46.304675 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2cc8fc0-5d5c-402c-9623-d6bd09039197","Type":"ContainerStarted","Data":"28302619ebffcb6ebb8931a0e498971a10a77e10f554dfe980f2a3091b7eacd4"} Feb 16 17:23:46 crc kubenswrapper[4794]: I0216 17:23:46.470755 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fx9wp"] Feb 16 17:23:46 crc kubenswrapper[4794]: I0216 17:23:46.647620 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 17:23:46 crc kubenswrapper[4794]: I0216 17:23:46.647678 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 17:23:46 crc kubenswrapper[4794]: I0216 17:23:46.716003 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 17:23:46 crc kubenswrapper[4794]: I0216 17:23:46.755682 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 17:23:47 crc kubenswrapper[4794]: I0216 17:23:47.321033 4794 generic.go:334] "Generic (PLEG): container finished" podID="e4b4b30a-27b0-4f11-a044-2a646790bc51" containerID="0e1534b003593efe9a555bff13087f90766daebb9c80b3665f4cb4d4e7e97e3f" exitCode=0 Feb 16 17:23:47 crc kubenswrapper[4794]: I0216 17:23:47.321376 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx9wp" event={"ID":"e4b4b30a-27b0-4f11-a044-2a646790bc51","Type":"ContainerDied","Data":"0e1534b003593efe9a555bff13087f90766daebb9c80b3665f4cb4d4e7e97e3f"} Feb 16 17:23:47 crc kubenswrapper[4794]: I0216 17:23:47.321404 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-fx9wp" event={"ID":"e4b4b30a-27b0-4f11-a044-2a646790bc51","Type":"ContainerStarted","Data":"f5b8d52642072334a05277edac456eeb1ed6a4f93f5ba30c9481a875a728599a"} Feb 16 17:23:47 crc kubenswrapper[4794]: I0216 17:23:47.328911 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2cc8fc0-5d5c-402c-9623-d6bd09039197","Type":"ContainerStarted","Data":"0c696ec54da9ce7d43a1e7f447d68a0107328b71833252e4a668aea9fe7cb362"} Feb 16 17:23:47 crc kubenswrapper[4794]: I0216 17:23:47.329905 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 17:23:47 crc kubenswrapper[4794]: I0216 17:23:47.330110 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 17:23:48 crc kubenswrapper[4794]: I0216 17:23:48.351765 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2cc8fc0-5d5c-402c-9623-d6bd09039197","Type":"ContainerStarted","Data":"7f3ec69321d5a4328192a00c3bffd5ec1cf7851b9f8c2470d80859a18f552c57"} Feb 16 17:23:49 crc kubenswrapper[4794]: I0216 17:23:49.370347 4794 generic.go:334] "Generic (PLEG): container finished" podID="e4b4b30a-27b0-4f11-a044-2a646790bc51" containerID="1689fada02d99ba4618bcdb38352ce13549e2cd3f0fd4c0a530753ae952c6f53" exitCode=0 Feb 16 17:23:49 crc kubenswrapper[4794]: I0216 17:23:49.370773 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx9wp" event={"ID":"e4b4b30a-27b0-4f11-a044-2a646790bc51","Type":"ContainerDied","Data":"1689fada02d99ba4618bcdb38352ce13549e2cd3f0fd4c0a530753ae952c6f53"} Feb 16 17:23:49 crc kubenswrapper[4794]: I0216 17:23:49.425324 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f2cc8fc0-5d5c-402c-9623-d6bd09039197","Type":"ContainerStarted","Data":"55b0e4d318440812292b6c3666cee7163ed382fb5a73ea595569d4740ef51297"} Feb 16 17:23:49 crc kubenswrapper[4794]: I0216 17:23:49.425359 4794 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:23:49 crc kubenswrapper[4794]: I0216 17:23:49.425377 4794 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:23:50 crc kubenswrapper[4794]: I0216 17:23:50.141753 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:23:50 crc kubenswrapper[4794]: I0216 17:23:50.142542 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:23:50 crc kubenswrapper[4794]: I0216 17:23:50.442959 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2cc8fc0-5d5c-402c-9623-d6bd09039197","Type":"ContainerStarted","Data":"5de04b8100acf33acfc88c22d2b77e99f5d68167dd2bc7cd74d9b92bfc15f85c"} Feb 16 17:23:50 crc kubenswrapper[4794]: I0216 17:23:50.443328 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 17:23:50 crc kubenswrapper[4794]: I0216 17:23:50.450611 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx9wp" event={"ID":"e4b4b30a-27b0-4f11-a044-2a646790bc51","Type":"ContainerStarted","Data":"41bbe25a1d5b329338c8c4149b4a984bf9e645678ecda557570da841d312e815"} Feb 16 17:23:50 crc 
kubenswrapper[4794]: I0216 17:23:50.473866 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.223573818 podStartE2EDuration="6.473847545s" podCreationTimestamp="2026-02-16 17:23:44 +0000 UTC" firstStartedPulling="2026-02-16 17:23:45.52546558 +0000 UTC m=+1451.473560227" lastFinishedPulling="2026-02-16 17:23:49.775739307 +0000 UTC m=+1455.723833954" observedRunningTime="2026-02-16 17:23:50.466311092 +0000 UTC m=+1456.414405739" watchObservedRunningTime="2026-02-16 17:23:50.473847545 +0000 UTC m=+1456.421942192" Feb 16 17:23:50 crc kubenswrapper[4794]: I0216 17:23:50.499386 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fx9wp" podStartSLOduration=3.009720441 podStartE2EDuration="5.499366855s" podCreationTimestamp="2026-02-16 17:23:45 +0000 UTC" firstStartedPulling="2026-02-16 17:23:47.327819661 +0000 UTC m=+1453.275914308" lastFinishedPulling="2026-02-16 17:23:49.817466075 +0000 UTC m=+1455.765560722" observedRunningTime="2026-02-16 17:23:50.48785919 +0000 UTC m=+1456.435953837" watchObservedRunningTime="2026-02-16 17:23:50.499366855 +0000 UTC m=+1456.447461502" Feb 16 17:23:52 crc kubenswrapper[4794]: I0216 17:23:52.417453 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 17:23:52 crc kubenswrapper[4794]: I0216 17:23:52.417783 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 17:23:52 crc kubenswrapper[4794]: I0216 17:23:52.453249 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 17:23:52 crc kubenswrapper[4794]: I0216 17:23:52.468690 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 17:23:52 crc kubenswrapper[4794]: 
I0216 17:23:52.480796 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 17:23:52 crc kubenswrapper[4794]: I0216 17:23:52.480829 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 17:23:52 crc kubenswrapper[4794]: I0216 17:23:52.528137 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:23:52 crc kubenswrapper[4794]: I0216 17:23:52.528417 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2cc8fc0-5d5c-402c-9623-d6bd09039197" containerName="ceilometer-central-agent" containerID="cri-o://0c696ec54da9ce7d43a1e7f447d68a0107328b71833252e4a668aea9fe7cb362" gracePeriod=30 Feb 16 17:23:52 crc kubenswrapper[4794]: I0216 17:23:52.528551 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2cc8fc0-5d5c-402c-9623-d6bd09039197" containerName="sg-core" containerID="cri-o://55b0e4d318440812292b6c3666cee7163ed382fb5a73ea595569d4740ef51297" gracePeriod=30 Feb 16 17:23:52 crc kubenswrapper[4794]: I0216 17:23:52.528536 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2cc8fc0-5d5c-402c-9623-d6bd09039197" containerName="proxy-httpd" containerID="cri-o://5de04b8100acf33acfc88c22d2b77e99f5d68167dd2bc7cd74d9b92bfc15f85c" gracePeriod=30 Feb 16 17:23:52 crc kubenswrapper[4794]: I0216 17:23:52.528606 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2cc8fc0-5d5c-402c-9623-d6bd09039197" containerName="ceilometer-notification-agent" containerID="cri-o://7f3ec69321d5a4328192a00c3bffd5ec1cf7851b9f8c2470d80859a18f552c57" gracePeriod=30 Feb 16 17:23:52 crc kubenswrapper[4794]: I0216 17:23:52.576702 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-external-api-0" Feb 16 17:23:52 crc kubenswrapper[4794]: I0216 17:23:52.576770 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 17:23:53 crc kubenswrapper[4794]: I0216 17:23:53.491035 4794 generic.go:334] "Generic (PLEG): container finished" podID="f1082952-332b-4c48-b37b-52c919f87f0f" containerID="bfb3f2ff3e912556f3ae81e4ead3c17ca392e240787708cd7bf47fb132d69f7b" exitCode=0 Feb 16 17:23:53 crc kubenswrapper[4794]: I0216 17:23:53.491127 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-599bd89595-29q2j" event={"ID":"f1082952-332b-4c48-b37b-52c919f87f0f","Type":"ContainerDied","Data":"bfb3f2ff3e912556f3ae81e4ead3c17ca392e240787708cd7bf47fb132d69f7b"} Feb 16 17:23:53 crc kubenswrapper[4794]: I0216 17:23:53.500579 4794 generic.go:334] "Generic (PLEG): container finished" podID="f2cc8fc0-5d5c-402c-9623-d6bd09039197" containerID="5de04b8100acf33acfc88c22d2b77e99f5d68167dd2bc7cd74d9b92bfc15f85c" exitCode=0 Feb 16 17:23:53 crc kubenswrapper[4794]: I0216 17:23:53.500608 4794 generic.go:334] "Generic (PLEG): container finished" podID="f2cc8fc0-5d5c-402c-9623-d6bd09039197" containerID="55b0e4d318440812292b6c3666cee7163ed382fb5a73ea595569d4740ef51297" exitCode=2 Feb 16 17:23:53 crc kubenswrapper[4794]: I0216 17:23:53.500615 4794 generic.go:334] "Generic (PLEG): container finished" podID="f2cc8fc0-5d5c-402c-9623-d6bd09039197" containerID="7f3ec69321d5a4328192a00c3bffd5ec1cf7851b9f8c2470d80859a18f552c57" exitCode=0 Feb 16 17:23:53 crc kubenswrapper[4794]: I0216 17:23:53.500657 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2cc8fc0-5d5c-402c-9623-d6bd09039197","Type":"ContainerDied","Data":"5de04b8100acf33acfc88c22d2b77e99f5d68167dd2bc7cd74d9b92bfc15f85c"} Feb 16 17:23:53 crc kubenswrapper[4794]: I0216 17:23:53.500706 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"f2cc8fc0-5d5c-402c-9623-d6bd09039197","Type":"ContainerDied","Data":"55b0e4d318440812292b6c3666cee7163ed382fb5a73ea595569d4740ef51297"} Feb 16 17:23:53 crc kubenswrapper[4794]: I0216 17:23:53.500719 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2cc8fc0-5d5c-402c-9623-d6bd09039197","Type":"ContainerDied","Data":"7f3ec69321d5a4328192a00c3bffd5ec1cf7851b9f8c2470d80859a18f552c57"} Feb 16 17:23:54 crc kubenswrapper[4794]: I0216 17:23:54.515043 4794 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:23:54 crc kubenswrapper[4794]: I0216 17:23:54.515074 4794 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:23:54 crc kubenswrapper[4794]: I0216 17:23:54.608058 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 17:23:54 crc kubenswrapper[4794]: I0216 17:23:54.609780 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 17:23:54 crc kubenswrapper[4794]: E0216 17:23:54.853624 4794 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bfb3f2ff3e912556f3ae81e4ead3c17ca392e240787708cd7bf47fb132d69f7b is running failed: container process not found" containerID="bfb3f2ff3e912556f3ae81e4ead3c17ca392e240787708cd7bf47fb132d69f7b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 17:23:54 crc kubenswrapper[4794]: E0216 17:23:54.854011 4794 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bfb3f2ff3e912556f3ae81e4ead3c17ca392e240787708cd7bf47fb132d69f7b is running failed: container process not found" containerID="bfb3f2ff3e912556f3ae81e4ead3c17ca392e240787708cd7bf47fb132d69f7b" 
cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 17:23:54 crc kubenswrapper[4794]: E0216 17:23:54.854445 4794 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bfb3f2ff3e912556f3ae81e4ead3c17ca392e240787708cd7bf47fb132d69f7b is running failed: container process not found" containerID="bfb3f2ff3e912556f3ae81e4ead3c17ca392e240787708cd7bf47fb132d69f7b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 17:23:54 crc kubenswrapper[4794]: E0216 17:23:54.854482 4794 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bfb3f2ff3e912556f3ae81e4ead3c17ca392e240787708cd7bf47fb132d69f7b is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-599bd89595-29q2j" podUID="f1082952-332b-4c48-b37b-52c919f87f0f" containerName="heat-engine" Feb 16 17:23:55 crc kubenswrapper[4794]: I0216 17:23:55.467801 4794 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podc932199b-1077-4aa1-aa88-7867c5c84212"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podc932199b-1077-4aa1-aa88-7867c5c84212] : Timed out while waiting for systemd to remove kubepods-besteffort-podc932199b_1077_4aa1_aa88_7867c5c84212.slice" Feb 16 17:23:55 crc kubenswrapper[4794]: E0216 17:23:55.468210 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podc932199b-1077-4aa1-aa88-7867c5c84212] : unable to destroy cgroup paths for cgroup [kubepods besteffort podc932199b-1077-4aa1-aa88-7867c5c84212] : Timed out while waiting for systemd to remove kubepods-besteffort-podc932199b_1077_4aa1_aa88_7867c5c84212.slice" pod="openstack/dnsmasq-dns-6578955fd5-r78sp" podUID="c932199b-1077-4aa1-aa88-7867c5c84212" Feb 16 17:23:55 crc kubenswrapper[4794]: I0216 17:23:55.532024 4794 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-r78sp" Feb 16 17:23:55 crc kubenswrapper[4794]: I0216 17:23:55.620612 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-r78sp"] Feb 16 17:23:55 crc kubenswrapper[4794]: I0216 17:23:55.635173 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-r78sp"] Feb 16 17:23:55 crc kubenswrapper[4794]: I0216 17:23:55.923682 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fx9wp" Feb 16 17:23:55 crc kubenswrapper[4794]: I0216 17:23:55.923742 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fx9wp" Feb 16 17:23:55 crc kubenswrapper[4794]: I0216 17:23:55.978673 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fx9wp" Feb 16 17:23:56 crc kubenswrapper[4794]: I0216 17:23:56.597397 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fx9wp" Feb 16 17:23:56 crc kubenswrapper[4794]: I0216 17:23:56.652713 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fx9wp"] Feb 16 17:23:56 crc kubenswrapper[4794]: I0216 17:23:56.811319 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c932199b-1077-4aa1-aa88-7867c5c84212" path="/var/lib/kubelet/pods/c932199b-1077-4aa1-aa88-7867c5c84212/volumes" Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.299740 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-599bd89595-29q2j" Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.413072 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1082952-332b-4c48-b37b-52c919f87f0f-combined-ca-bundle\") pod \"f1082952-332b-4c48-b37b-52c919f87f0f\" (UID: \"f1082952-332b-4c48-b37b-52c919f87f0f\") " Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.413114 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1082952-332b-4c48-b37b-52c919f87f0f-config-data\") pod \"f1082952-332b-4c48-b37b-52c919f87f0f\" (UID: \"f1082952-332b-4c48-b37b-52c919f87f0f\") " Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.413260 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgb6f\" (UniqueName: \"kubernetes.io/projected/f1082952-332b-4c48-b37b-52c919f87f0f-kube-api-access-xgb6f\") pod \"f1082952-332b-4c48-b37b-52c919f87f0f\" (UID: \"f1082952-332b-4c48-b37b-52c919f87f0f\") " Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.413385 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1082952-332b-4c48-b37b-52c919f87f0f-config-data-custom\") pod \"f1082952-332b-4c48-b37b-52c919f87f0f\" (UID: \"f1082952-332b-4c48-b37b-52c919f87f0f\") " Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.418291 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1082952-332b-4c48-b37b-52c919f87f0f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f1082952-332b-4c48-b37b-52c919f87f0f" (UID: "f1082952-332b-4c48-b37b-52c919f87f0f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.421450 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1082952-332b-4c48-b37b-52c919f87f0f-kube-api-access-xgb6f" (OuterVolumeSpecName: "kube-api-access-xgb6f") pod "f1082952-332b-4c48-b37b-52c919f87f0f" (UID: "f1082952-332b-4c48-b37b-52c919f87f0f"). InnerVolumeSpecName "kube-api-access-xgb6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.457927 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1082952-332b-4c48-b37b-52c919f87f0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1082952-332b-4c48-b37b-52c919f87f0f" (UID: "f1082952-332b-4c48-b37b-52c919f87f0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.480568 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1082952-332b-4c48-b37b-52c919f87f0f-config-data" (OuterVolumeSpecName: "config-data") pod "f1082952-332b-4c48-b37b-52c919f87f0f" (UID: "f1082952-332b-4c48-b37b-52c919f87f0f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.518582 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1082952-332b-4c48-b37b-52c919f87f0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.518634 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1082952-332b-4c48-b37b-52c919f87f0f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.518645 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgb6f\" (UniqueName: \"kubernetes.io/projected/f1082952-332b-4c48-b37b-52c919f87f0f-kube-api-access-xgb6f\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.518685 4794 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1082952-332b-4c48-b37b-52c919f87f0f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.590233 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4gn4j" event={"ID":"619e01e3-7fcb-4b21-b1df-07ba70374e09","Type":"ContainerStarted","Data":"fab724d84b281db1c2dea99f457728689590085c962814a33fdffd092056286a"} Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.596825 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-599bd89595-29q2j" Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.596869 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fx9wp" podUID="e4b4b30a-27b0-4f11-a044-2a646790bc51" containerName="registry-server" containerID="cri-o://41bbe25a1d5b329338c8c4149b4a984bf9e645678ecda557570da841d312e815" gracePeriod=2 Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.596948 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-599bd89595-29q2j" event={"ID":"f1082952-332b-4c48-b37b-52c919f87f0f","Type":"ContainerDied","Data":"f968b5c3e3c7f091d2891e3310e9b9e08282c503b94efa1791dbceacf77339b6"} Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.597027 4794 scope.go:117] "RemoveContainer" containerID="bfb3f2ff3e912556f3ae81e4ead3c17ca392e240787708cd7bf47fb132d69f7b" Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.624149 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-4gn4j" podStartSLOduration=2.0398885 podStartE2EDuration="17.624121611s" podCreationTimestamp="2026-02-16 17:23:41 +0000 UTC" firstStartedPulling="2026-02-16 17:23:42.43131667 +0000 UTC m=+1448.379411317" lastFinishedPulling="2026-02-16 17:23:58.015549791 +0000 UTC m=+1463.963644428" observedRunningTime="2026-02-16 17:23:58.614620383 +0000 UTC m=+1464.562715040" watchObservedRunningTime="2026-02-16 17:23:58.624121611 +0000 UTC m=+1464.572216258" Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.658898 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-599bd89595-29q2j"] Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.671769 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-599bd89595-29q2j"] Feb 16 17:23:58 crc kubenswrapper[4794]: I0216 17:23:58.806663 4794 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="f1082952-332b-4c48-b37b-52c919f87f0f" path="/var/lib/kubelet/pods/f1082952-332b-4c48-b37b-52c919f87f0f/volumes" Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.113456 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fx9wp" Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.236356 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4b4b30a-27b0-4f11-a044-2a646790bc51-catalog-content\") pod \"e4b4b30a-27b0-4f11-a044-2a646790bc51\" (UID: \"e4b4b30a-27b0-4f11-a044-2a646790bc51\") " Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.236425 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dks4\" (UniqueName: \"kubernetes.io/projected/e4b4b30a-27b0-4f11-a044-2a646790bc51-kube-api-access-4dks4\") pod \"e4b4b30a-27b0-4f11-a044-2a646790bc51\" (UID: \"e4b4b30a-27b0-4f11-a044-2a646790bc51\") " Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.236451 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4b4b30a-27b0-4f11-a044-2a646790bc51-utilities\") pod \"e4b4b30a-27b0-4f11-a044-2a646790bc51\" (UID: \"e4b4b30a-27b0-4f11-a044-2a646790bc51\") " Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.237524 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4b4b30a-27b0-4f11-a044-2a646790bc51-utilities" (OuterVolumeSpecName: "utilities") pod "e4b4b30a-27b0-4f11-a044-2a646790bc51" (UID: "e4b4b30a-27b0-4f11-a044-2a646790bc51"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.256808 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4b4b30a-27b0-4f11-a044-2a646790bc51-kube-api-access-4dks4" (OuterVolumeSpecName: "kube-api-access-4dks4") pod "e4b4b30a-27b0-4f11-a044-2a646790bc51" (UID: "e4b4b30a-27b0-4f11-a044-2a646790bc51"). InnerVolumeSpecName "kube-api-access-4dks4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.260892 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4b4b30a-27b0-4f11-a044-2a646790bc51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4b4b30a-27b0-4f11-a044-2a646790bc51" (UID: "e4b4b30a-27b0-4f11-a044-2a646790bc51"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.339185 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4b4b30a-27b0-4f11-a044-2a646790bc51-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.339223 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dks4\" (UniqueName: \"kubernetes.io/projected/e4b4b30a-27b0-4f11-a044-2a646790bc51-kube-api-access-4dks4\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.339235 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4b4b30a-27b0-4f11-a044-2a646790bc51-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.616460 4794 generic.go:334] "Generic (PLEG): container finished" podID="e4b4b30a-27b0-4f11-a044-2a646790bc51" 
containerID="41bbe25a1d5b329338c8c4149b4a984bf9e645678ecda557570da841d312e815" exitCode=0 Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.616808 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fx9wp" Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.616731 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx9wp" event={"ID":"e4b4b30a-27b0-4f11-a044-2a646790bc51","Type":"ContainerDied","Data":"41bbe25a1d5b329338c8c4149b4a984bf9e645678ecda557570da841d312e815"} Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.616910 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx9wp" event={"ID":"e4b4b30a-27b0-4f11-a044-2a646790bc51","Type":"ContainerDied","Data":"f5b8d52642072334a05277edac456eeb1ed6a4f93f5ba30c9481a875a728599a"} Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.616934 4794 scope.go:117] "RemoveContainer" containerID="41bbe25a1d5b329338c8c4149b4a984bf9e645678ecda557570da841d312e815" Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.662482 4794 scope.go:117] "RemoveContainer" containerID="1689fada02d99ba4618bcdb38352ce13549e2cd3f0fd4c0a530753ae952c6f53" Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.666636 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fx9wp"] Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.683346 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fx9wp"] Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.748918 4794 scope.go:117] "RemoveContainer" containerID="0e1534b003593efe9a555bff13087f90766daebb9c80b3665f4cb4d4e7e97e3f" Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.792086 4794 scope.go:117] "RemoveContainer" containerID="41bbe25a1d5b329338c8c4149b4a984bf9e645678ecda557570da841d312e815" Feb 16 
17:23:59 crc kubenswrapper[4794]: E0216 17:23:59.799421 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41bbe25a1d5b329338c8c4149b4a984bf9e645678ecda557570da841d312e815\": container with ID starting with 41bbe25a1d5b329338c8c4149b4a984bf9e645678ecda557570da841d312e815 not found: ID does not exist" containerID="41bbe25a1d5b329338c8c4149b4a984bf9e645678ecda557570da841d312e815" Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.799467 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41bbe25a1d5b329338c8c4149b4a984bf9e645678ecda557570da841d312e815"} err="failed to get container status \"41bbe25a1d5b329338c8c4149b4a984bf9e645678ecda557570da841d312e815\": rpc error: code = NotFound desc = could not find container \"41bbe25a1d5b329338c8c4149b4a984bf9e645678ecda557570da841d312e815\": container with ID starting with 41bbe25a1d5b329338c8c4149b4a984bf9e645678ecda557570da841d312e815 not found: ID does not exist" Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.799490 4794 scope.go:117] "RemoveContainer" containerID="1689fada02d99ba4618bcdb38352ce13549e2cd3f0fd4c0a530753ae952c6f53" Feb 16 17:23:59 crc kubenswrapper[4794]: E0216 17:23:59.807697 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1689fada02d99ba4618bcdb38352ce13549e2cd3f0fd4c0a530753ae952c6f53\": container with ID starting with 1689fada02d99ba4618bcdb38352ce13549e2cd3f0fd4c0a530753ae952c6f53 not found: ID does not exist" containerID="1689fada02d99ba4618bcdb38352ce13549e2cd3f0fd4c0a530753ae952c6f53" Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.807738 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1689fada02d99ba4618bcdb38352ce13549e2cd3f0fd4c0a530753ae952c6f53"} err="failed to get container status 
\"1689fada02d99ba4618bcdb38352ce13549e2cd3f0fd4c0a530753ae952c6f53\": rpc error: code = NotFound desc = could not find container \"1689fada02d99ba4618bcdb38352ce13549e2cd3f0fd4c0a530753ae952c6f53\": container with ID starting with 1689fada02d99ba4618bcdb38352ce13549e2cd3f0fd4c0a530753ae952c6f53 not found: ID does not exist" Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.807762 4794 scope.go:117] "RemoveContainer" containerID="0e1534b003593efe9a555bff13087f90766daebb9c80b3665f4cb4d4e7e97e3f" Feb 16 17:23:59 crc kubenswrapper[4794]: E0216 17:23:59.813527 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e1534b003593efe9a555bff13087f90766daebb9c80b3665f4cb4d4e7e97e3f\": container with ID starting with 0e1534b003593efe9a555bff13087f90766daebb9c80b3665f4cb4d4e7e97e3f not found: ID does not exist" containerID="0e1534b003593efe9a555bff13087f90766daebb9c80b3665f4cb4d4e7e97e3f" Feb 16 17:23:59 crc kubenswrapper[4794]: I0216 17:23:59.813566 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e1534b003593efe9a555bff13087f90766daebb9c80b3665f4cb4d4e7e97e3f"} err="failed to get container status \"0e1534b003593efe9a555bff13087f90766daebb9c80b3665f4cb4d4e7e97e3f\": rpc error: code = NotFound desc = could not find container \"0e1534b003593efe9a555bff13087f90766daebb9c80b3665f4cb4d4e7e97e3f\": container with ID starting with 0e1534b003593efe9a555bff13087f90766daebb9c80b3665f4cb4d4e7e97e3f not found: ID does not exist" Feb 16 17:24:00 crc kubenswrapper[4794]: I0216 17:24:00.805196 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4b4b30a-27b0-4f11-a044-2a646790bc51" path="/var/lib/kubelet/pods/e4b4b30a-27b0-4f11-a044-2a646790bc51/volumes" Feb 16 17:24:01 crc kubenswrapper[4794]: I0216 17:24:01.652583 4794 generic.go:334] "Generic (PLEG): container finished" podID="f2cc8fc0-5d5c-402c-9623-d6bd09039197" 
containerID="0c696ec54da9ce7d43a1e7f447d68a0107328b71833252e4a668aea9fe7cb362" exitCode=0 Feb 16 17:24:01 crc kubenswrapper[4794]: I0216 17:24:01.652781 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2cc8fc0-5d5c-402c-9623-d6bd09039197","Type":"ContainerDied","Data":"0c696ec54da9ce7d43a1e7f447d68a0107328b71833252e4a668aea9fe7cb362"} Feb 16 17:24:01 crc kubenswrapper[4794]: I0216 17:24:01.990852 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.109901 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-scripts\") pod \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.110034 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-combined-ca-bundle\") pod \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.110131 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzrtx\" (UniqueName: \"kubernetes.io/projected/f2cc8fc0-5d5c-402c-9623-d6bd09039197-kube-api-access-dzrtx\") pod \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.110184 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2cc8fc0-5d5c-402c-9623-d6bd09039197-run-httpd\") pod \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " Feb 16 17:24:02 crc 
kubenswrapper[4794]: I0216 17:24:02.110332 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2cc8fc0-5d5c-402c-9623-d6bd09039197-log-httpd\") pod \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.110385 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-config-data\") pod \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.110420 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-sg-core-conf-yaml\") pod \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\" (UID: \"f2cc8fc0-5d5c-402c-9623-d6bd09039197\") " Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.113484 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2cc8fc0-5d5c-402c-9623-d6bd09039197-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f2cc8fc0-5d5c-402c-9623-d6bd09039197" (UID: "f2cc8fc0-5d5c-402c-9623-d6bd09039197"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.113793 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2cc8fc0-5d5c-402c-9623-d6bd09039197-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f2cc8fc0-5d5c-402c-9623-d6bd09039197" (UID: "f2cc8fc0-5d5c-402c-9623-d6bd09039197"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.116172 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-scripts" (OuterVolumeSpecName: "scripts") pod "f2cc8fc0-5d5c-402c-9623-d6bd09039197" (UID: "f2cc8fc0-5d5c-402c-9623-d6bd09039197"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.118317 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2cc8fc0-5d5c-402c-9623-d6bd09039197-kube-api-access-dzrtx" (OuterVolumeSpecName: "kube-api-access-dzrtx") pod "f2cc8fc0-5d5c-402c-9623-d6bd09039197" (UID: "f2cc8fc0-5d5c-402c-9623-d6bd09039197"). InnerVolumeSpecName "kube-api-access-dzrtx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.148555 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f2cc8fc0-5d5c-402c-9623-d6bd09039197" (UID: "f2cc8fc0-5d5c-402c-9623-d6bd09039197"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.213541 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.213578 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzrtx\" (UniqueName: \"kubernetes.io/projected/f2cc8fc0-5d5c-402c-9623-d6bd09039197-kube-api-access-dzrtx\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.213590 4794 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2cc8fc0-5d5c-402c-9623-d6bd09039197-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.213599 4794 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2cc8fc0-5d5c-402c-9623-d6bd09039197-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.213606 4794 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.231782 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2cc8fc0-5d5c-402c-9623-d6bd09039197" (UID: "f2cc8fc0-5d5c-402c-9623-d6bd09039197"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.253271 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-config-data" (OuterVolumeSpecName: "config-data") pod "f2cc8fc0-5d5c-402c-9623-d6bd09039197" (UID: "f2cc8fc0-5d5c-402c-9623-d6bd09039197"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.316009 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.316056 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2cc8fc0-5d5c-402c-9623-d6bd09039197-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.680289 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2cc8fc0-5d5c-402c-9623-d6bd09039197","Type":"ContainerDied","Data":"28302619ebffcb6ebb8931a0e498971a10a77e10f554dfe980f2a3091b7eacd4"}
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.680676 4794 scope.go:117] "RemoveContainer" containerID="5de04b8100acf33acfc88c22d2b77e99f5d68167dd2bc7cd74d9b92bfc15f85c"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.680852 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.715153 4794 scope.go:117] "RemoveContainer" containerID="55b0e4d318440812292b6c3666cee7163ed382fb5a73ea595569d4740ef51297"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.725899 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.735954 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.745171 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:24:02 crc kubenswrapper[4794]: E0216 17:24:02.745614 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2cc8fc0-5d5c-402c-9623-d6bd09039197" containerName="sg-core"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.745648 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2cc8fc0-5d5c-402c-9623-d6bd09039197" containerName="sg-core"
Feb 16 17:24:02 crc kubenswrapper[4794]: E0216 17:24:02.745673 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2cc8fc0-5d5c-402c-9623-d6bd09039197" containerName="ceilometer-notification-agent"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.745679 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2cc8fc0-5d5c-402c-9623-d6bd09039197" containerName="ceilometer-notification-agent"
Feb 16 17:24:02 crc kubenswrapper[4794]: E0216 17:24:02.745689 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2cc8fc0-5d5c-402c-9623-d6bd09039197" containerName="ceilometer-central-agent"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.745705 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2cc8fc0-5d5c-402c-9623-d6bd09039197" containerName="ceilometer-central-agent"
Feb 16 17:24:02 crc kubenswrapper[4794]: E0216 17:24:02.745721 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1082952-332b-4c48-b37b-52c919f87f0f" containerName="heat-engine"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.745727 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1082952-332b-4c48-b37b-52c919f87f0f" containerName="heat-engine"
Feb 16 17:24:02 crc kubenswrapper[4794]: E0216 17:24:02.745742 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4b4b30a-27b0-4f11-a044-2a646790bc51" containerName="extract-content"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.745750 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4b4b30a-27b0-4f11-a044-2a646790bc51" containerName="extract-content"
Feb 16 17:24:02 crc kubenswrapper[4794]: E0216 17:24:02.745760 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2cc8fc0-5d5c-402c-9623-d6bd09039197" containerName="proxy-httpd"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.745765 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2cc8fc0-5d5c-402c-9623-d6bd09039197" containerName="proxy-httpd"
Feb 16 17:24:02 crc kubenswrapper[4794]: E0216 17:24:02.745774 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4b4b30a-27b0-4f11-a044-2a646790bc51" containerName="extract-utilities"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.745780 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4b4b30a-27b0-4f11-a044-2a646790bc51" containerName="extract-utilities"
Feb 16 17:24:02 crc kubenswrapper[4794]: E0216 17:24:02.745796 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4b4b30a-27b0-4f11-a044-2a646790bc51" containerName="registry-server"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.745801 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4b4b30a-27b0-4f11-a044-2a646790bc51" containerName="registry-server"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.745989 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2cc8fc0-5d5c-402c-9623-d6bd09039197" containerName="ceilometer-central-agent"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.746000 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2cc8fc0-5d5c-402c-9623-d6bd09039197" containerName="sg-core"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.746016 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1082952-332b-4c48-b37b-52c919f87f0f" containerName="heat-engine"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.746026 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4b4b30a-27b0-4f11-a044-2a646790bc51" containerName="registry-server"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.746034 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2cc8fc0-5d5c-402c-9623-d6bd09039197" containerName="ceilometer-notification-agent"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.746043 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2cc8fc0-5d5c-402c-9623-d6bd09039197" containerName="proxy-httpd"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.749232 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.751793 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.751870 4794 scope.go:117] "RemoveContainer" containerID="7f3ec69321d5a4328192a00c3bffd5ec1cf7851b9f8c2470d80859a18f552c57"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.763477 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.789439 4794 scope.go:117] "RemoveContainer" containerID="0c696ec54da9ce7d43a1e7f447d68a0107328b71833252e4a668aea9fe7cb362"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.789651 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.809548 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2cc8fc0-5d5c-402c-9623-d6bd09039197" path="/var/lib/kubelet/pods/f2cc8fc0-5d5c-402c-9623-d6bd09039197/volumes"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.829661 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca440010-9acc-4aea-b64a-3f1a500571a1-run-httpd\") pod \"ceilometer-0\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.829805 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca440010-9acc-4aea-b64a-3f1a500571a1-log-httpd\") pod \"ceilometer-0\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.829832 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-scripts\") pod \"ceilometer-0\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.829858 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.830072 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-465t9\" (UniqueName: \"kubernetes.io/projected/ca440010-9acc-4aea-b64a-3f1a500571a1-kube-api-access-465t9\") pod \"ceilometer-0\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.830192 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-config-data\") pod \"ceilometer-0\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.830276 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.932263 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-465t9\" (UniqueName: \"kubernetes.io/projected/ca440010-9acc-4aea-b64a-3f1a500571a1-kube-api-access-465t9\") pod \"ceilometer-0\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.932434 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-config-data\") pod \"ceilometer-0\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.932512 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.932638 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca440010-9acc-4aea-b64a-3f1a500571a1-run-httpd\") pod \"ceilometer-0\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.933164 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca440010-9acc-4aea-b64a-3f1a500571a1-log-httpd\") pod \"ceilometer-0\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.933201 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-scripts\") pod \"ceilometer-0\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.933250 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.933539 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca440010-9acc-4aea-b64a-3f1a500571a1-run-httpd\") pod \"ceilometer-0\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.933948 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca440010-9acc-4aea-b64a-3f1a500571a1-log-httpd\") pod \"ceilometer-0\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.937701 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.937994 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-scripts\") pod \"ceilometer-0\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.939453 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.954119 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-config-data\") pod \"ceilometer-0\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " pod="openstack/ceilometer-0"
Feb 16 17:24:02 crc kubenswrapper[4794]: I0216 17:24:02.954712 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-465t9\" (UniqueName: \"kubernetes.io/projected/ca440010-9acc-4aea-b64a-3f1a500571a1-kube-api-access-465t9\") pod \"ceilometer-0\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " pod="openstack/ceilometer-0"
Feb 16 17:24:03 crc kubenswrapper[4794]: I0216 17:24:03.080455 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:24:03 crc kubenswrapper[4794]: W0216 17:24:03.567068 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca440010_9acc_4aea_b64a_3f1a500571a1.slice/crio-2e7bd29c9a72fb5f630368aa553d897cbe2acda3b4a06f46b416c69a72e82361 WatchSource:0}: Error finding container 2e7bd29c9a72fb5f630368aa553d897cbe2acda3b4a06f46b416c69a72e82361: Status 404 returned error can't find the container with id 2e7bd29c9a72fb5f630368aa553d897cbe2acda3b4a06f46b416c69a72e82361
Feb 16 17:24:03 crc kubenswrapper[4794]: I0216 17:24:03.583711 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:24:03 crc kubenswrapper[4794]: I0216 17:24:03.698031 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca440010-9acc-4aea-b64a-3f1a500571a1","Type":"ContainerStarted","Data":"2e7bd29c9a72fb5f630368aa553d897cbe2acda3b4a06f46b416c69a72e82361"}
Feb 16 17:24:04 crc kubenswrapper[4794]: I0216 17:24:04.715027 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca440010-9acc-4aea-b64a-3f1a500571a1","Type":"ContainerStarted","Data":"3aa36a5c34833ee40309aed4a3b2cb76b2d1a8edbaa13bab80671c6b6623a432"}
Feb 16 17:24:05 crc kubenswrapper[4794]: I0216 17:24:05.733855 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca440010-9acc-4aea-b64a-3f1a500571a1","Type":"ContainerStarted","Data":"06751d712288d1af8320103dca2ec8dc264d422a69437f183f6f5ec3a553b846"}
Feb 16 17:24:05 crc kubenswrapper[4794]: I0216 17:24:05.735238 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca440010-9acc-4aea-b64a-3f1a500571a1","Type":"ContainerStarted","Data":"5281fa4dd554a1db8a0e184f813957398dc5a361f9265b2f85819a54e085156b"}
Feb 16 17:24:07 crc kubenswrapper[4794]: I0216 17:24:07.759232 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca440010-9acc-4aea-b64a-3f1a500571a1","Type":"ContainerStarted","Data":"73ed452ed6a453b75d793415e56eaa0baf856e6ae776a4fa19fc4424f362e031"}
Feb 16 17:24:07 crc kubenswrapper[4794]: I0216 17:24:07.759845 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 16 17:24:07 crc kubenswrapper[4794]: I0216 17:24:07.795369 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9000685929999999 podStartE2EDuration="5.795347479s" podCreationTimestamp="2026-02-16 17:24:02 +0000 UTC" firstStartedPulling="2026-02-16 17:24:03.569878252 +0000 UTC m=+1469.517972899" lastFinishedPulling="2026-02-16 17:24:07.465157138 +0000 UTC m=+1473.413251785" observedRunningTime="2026-02-16 17:24:07.780562632 +0000 UTC m=+1473.728657289" watchObservedRunningTime="2026-02-16 17:24:07.795347479 +0000 UTC m=+1473.743442136"
Feb 16 17:24:08 crc kubenswrapper[4794]: I0216 17:24:08.772632 4794 generic.go:334] "Generic (PLEG): container finished" podID="619e01e3-7fcb-4b21-b1df-07ba70374e09" containerID="fab724d84b281db1c2dea99f457728689590085c962814a33fdffd092056286a" exitCode=0
Feb 16 17:24:08 crc kubenswrapper[4794]: I0216 17:24:08.772710 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4gn4j" event={"ID":"619e01e3-7fcb-4b21-b1df-07ba70374e09","Type":"ContainerDied","Data":"fab724d84b281db1c2dea99f457728689590085c962814a33fdffd092056286a"}
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.260399 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4gn4j"
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.421382 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619e01e3-7fcb-4b21-b1df-07ba70374e09-combined-ca-bundle\") pod \"619e01e3-7fcb-4b21-b1df-07ba70374e09\" (UID: \"619e01e3-7fcb-4b21-b1df-07ba70374e09\") "
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.421672 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntvjp\" (UniqueName: \"kubernetes.io/projected/619e01e3-7fcb-4b21-b1df-07ba70374e09-kube-api-access-ntvjp\") pod \"619e01e3-7fcb-4b21-b1df-07ba70374e09\" (UID: \"619e01e3-7fcb-4b21-b1df-07ba70374e09\") "
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.421874 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/619e01e3-7fcb-4b21-b1df-07ba70374e09-config-data\") pod \"619e01e3-7fcb-4b21-b1df-07ba70374e09\" (UID: \"619e01e3-7fcb-4b21-b1df-07ba70374e09\") "
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.421937 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/619e01e3-7fcb-4b21-b1df-07ba70374e09-scripts\") pod \"619e01e3-7fcb-4b21-b1df-07ba70374e09\" (UID: \"619e01e3-7fcb-4b21-b1df-07ba70374e09\") "
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.432135 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619e01e3-7fcb-4b21-b1df-07ba70374e09-scripts" (OuterVolumeSpecName: "scripts") pod "619e01e3-7fcb-4b21-b1df-07ba70374e09" (UID: "619e01e3-7fcb-4b21-b1df-07ba70374e09"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.448331 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/619e01e3-7fcb-4b21-b1df-07ba70374e09-kube-api-access-ntvjp" (OuterVolumeSpecName: "kube-api-access-ntvjp") pod "619e01e3-7fcb-4b21-b1df-07ba70374e09" (UID: "619e01e3-7fcb-4b21-b1df-07ba70374e09"). InnerVolumeSpecName "kube-api-access-ntvjp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.470624 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619e01e3-7fcb-4b21-b1df-07ba70374e09-config-data" (OuterVolumeSpecName: "config-data") pod "619e01e3-7fcb-4b21-b1df-07ba70374e09" (UID: "619e01e3-7fcb-4b21-b1df-07ba70374e09"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.480479 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619e01e3-7fcb-4b21-b1df-07ba70374e09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "619e01e3-7fcb-4b21-b1df-07ba70374e09" (UID: "619e01e3-7fcb-4b21-b1df-07ba70374e09"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.525454 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619e01e3-7fcb-4b21-b1df-07ba70374e09-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.525627 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntvjp\" (UniqueName: \"kubernetes.io/projected/619e01e3-7fcb-4b21-b1df-07ba70374e09-kube-api-access-ntvjp\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.525723 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/619e01e3-7fcb-4b21-b1df-07ba70374e09-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.525812 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/619e01e3-7fcb-4b21-b1df-07ba70374e09-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.797687 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4gn4j"
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.804117 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4gn4j" event={"ID":"619e01e3-7fcb-4b21-b1df-07ba70374e09","Type":"ContainerDied","Data":"9aebbce0e022da03763509bc0aa4b1334bc0e436b4857cd508e37f8bb1876b52"}
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.804168 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9aebbce0e022da03763509bc0aa4b1334bc0e436b4857cd508e37f8bb1876b52"
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.902211 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 16 17:24:10 crc kubenswrapper[4794]: E0216 17:24:10.902790 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619e01e3-7fcb-4b21-b1df-07ba70374e09" containerName="nova-cell0-conductor-db-sync"
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.902812 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="619e01e3-7fcb-4b21-b1df-07ba70374e09" containerName="nova-cell0-conductor-db-sync"
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.903126 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="619e01e3-7fcb-4b21-b1df-07ba70374e09" containerName="nova-cell0-conductor-db-sync"
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.904087 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.917901 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-b4ckg"
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.918158 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.947685 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br9cx\" (UniqueName: \"kubernetes.io/projected/7beb845f-ab40-4f39-82eb-dff623435a03-kube-api-access-br9cx\") pod \"nova-cell0-conductor-0\" (UID: \"7beb845f-ab40-4f39-82eb-dff623435a03\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.948108 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7beb845f-ab40-4f39-82eb-dff623435a03-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7beb845f-ab40-4f39-82eb-dff623435a03\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.948349 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7beb845f-ab40-4f39-82eb-dff623435a03-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7beb845f-ab40-4f39-82eb-dff623435a03\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 17:24:10 crc kubenswrapper[4794]: I0216 17:24:10.949967 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 16 17:24:11 crc kubenswrapper[4794]: I0216 17:24:11.050669 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br9cx\" (UniqueName: \"kubernetes.io/projected/7beb845f-ab40-4f39-82eb-dff623435a03-kube-api-access-br9cx\") pod \"nova-cell0-conductor-0\" (UID: \"7beb845f-ab40-4f39-82eb-dff623435a03\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 17:24:11 crc kubenswrapper[4794]: I0216 17:24:11.050831 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7beb845f-ab40-4f39-82eb-dff623435a03-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7beb845f-ab40-4f39-82eb-dff623435a03\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 17:24:11 crc kubenswrapper[4794]: I0216 17:24:11.050932 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7beb845f-ab40-4f39-82eb-dff623435a03-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7beb845f-ab40-4f39-82eb-dff623435a03\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 17:24:11 crc kubenswrapper[4794]: I0216 17:24:11.059942 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7beb845f-ab40-4f39-82eb-dff623435a03-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7beb845f-ab40-4f39-82eb-dff623435a03\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 17:24:11 crc kubenswrapper[4794]: I0216 17:24:11.060080 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7beb845f-ab40-4f39-82eb-dff623435a03-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7beb845f-ab40-4f39-82eb-dff623435a03\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 17:24:11 crc kubenswrapper[4794]: I0216 17:24:11.114074 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br9cx\" (UniqueName: \"kubernetes.io/projected/7beb845f-ab40-4f39-82eb-dff623435a03-kube-api-access-br9cx\") pod \"nova-cell0-conductor-0\" (UID: \"7beb845f-ab40-4f39-82eb-dff623435a03\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 17:24:11 crc kubenswrapper[4794]: I0216 17:24:11.224886 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 16 17:24:11 crc kubenswrapper[4794]: I0216 17:24:11.310153 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:24:11 crc kubenswrapper[4794]: I0216 17:24:11.310436 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca440010-9acc-4aea-b64a-3f1a500571a1" containerName="ceilometer-central-agent" containerID="cri-o://3aa36a5c34833ee40309aed4a3b2cb76b2d1a8edbaa13bab80671c6b6623a432" gracePeriod=30
Feb 16 17:24:11 crc kubenswrapper[4794]: I0216 17:24:11.311287 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca440010-9acc-4aea-b64a-3f1a500571a1" containerName="proxy-httpd" containerID="cri-o://73ed452ed6a453b75d793415e56eaa0baf856e6ae776a4fa19fc4424f362e031" gracePeriod=30
Feb 16 17:24:11 crc kubenswrapper[4794]: I0216 17:24:11.311382 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca440010-9acc-4aea-b64a-3f1a500571a1" containerName="sg-core" containerID="cri-o://06751d712288d1af8320103dca2ec8dc264d422a69437f183f6f5ec3a553b846" gracePeriod=30
Feb 16 17:24:11 crc kubenswrapper[4794]: I0216 17:24:11.311425 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca440010-9acc-4aea-b64a-3f1a500571a1" containerName="ceilometer-notification-agent" containerID="cri-o://5281fa4dd554a1db8a0e184f813957398dc5a361f9265b2f85819a54e085156b" gracePeriod=30
Feb 16 17:24:11 crc kubenswrapper[4794]: I0216 17:24:11.746010 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 16 17:24:11 crc kubenswrapper[4794]: W0216 17:24:11.754192 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7beb845f_ab40_4f39_82eb_dff623435a03.slice/crio-e848c96fffa5feb479b5c4e51b17db907c114bf10af89a44abf853041d0fbd2d WatchSource:0}: Error finding container e848c96fffa5feb479b5c4e51b17db907c114bf10af89a44abf853041d0fbd2d: Status 404 returned error can't find the container with id e848c96fffa5feb479b5c4e51b17db907c114bf10af89a44abf853041d0fbd2d
Feb 16 17:24:11 crc kubenswrapper[4794]: I0216 17:24:11.808789 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7beb845f-ab40-4f39-82eb-dff623435a03","Type":"ContainerStarted","Data":"e848c96fffa5feb479b5c4e51b17db907c114bf10af89a44abf853041d0fbd2d"}
Feb 16 17:24:11 crc kubenswrapper[4794]: I0216 17:24:11.811055 4794 generic.go:334] "Generic (PLEG): container finished" podID="ca440010-9acc-4aea-b64a-3f1a500571a1" containerID="73ed452ed6a453b75d793415e56eaa0baf856e6ae776a4fa19fc4424f362e031" exitCode=0
Feb 16 17:24:11 crc kubenswrapper[4794]: I0216 17:24:11.811081 4794 generic.go:334] "Generic (PLEG): container finished" podID="ca440010-9acc-4aea-b64a-3f1a500571a1" containerID="06751d712288d1af8320103dca2ec8dc264d422a69437f183f6f5ec3a553b846" exitCode=2
Feb 16 17:24:11 crc kubenswrapper[4794]: I0216 17:24:11.811088 4794 generic.go:334] "Generic (PLEG): container finished" podID="ca440010-9acc-4aea-b64a-3f1a500571a1" containerID="5281fa4dd554a1db8a0e184f813957398dc5a361f9265b2f85819a54e085156b" exitCode=0
Feb 16 17:24:11 crc kubenswrapper[4794]: I0216 17:24:11.811104 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca440010-9acc-4aea-b64a-3f1a500571a1","Type":"ContainerDied","Data":"73ed452ed6a453b75d793415e56eaa0baf856e6ae776a4fa19fc4424f362e031"}
Feb 16 17:24:11 crc kubenswrapper[4794]: I0216 17:24:11.811124 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca440010-9acc-4aea-b64a-3f1a500571a1","Type":"ContainerDied","Data":"06751d712288d1af8320103dca2ec8dc264d422a69437f183f6f5ec3a553b846"}
Feb 16 17:24:11 crc kubenswrapper[4794]: I0216 17:24:11.811135 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca440010-9acc-4aea-b64a-3f1a500571a1","Type":"ContainerDied","Data":"5281fa4dd554a1db8a0e184f813957398dc5a361f9265b2f85819a54e085156b"}
Feb 16 17:24:12 crc kubenswrapper[4794]: I0216 17:24:12.826066 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7beb845f-ab40-4f39-82eb-dff623435a03","Type":"ContainerStarted","Data":"49f5b6764e4e080b548739b7c47317307c7d5cdc08c38e529506a2cc9ffbdf36"}
Feb 16 17:24:12 crc kubenswrapper[4794]: I0216 17:24:12.826741 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Feb 16 17:24:12 crc kubenswrapper[4794]: I0216 17:24:12.855063 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.855001796 podStartE2EDuration="2.855001796s" podCreationTimestamp="2026-02-16 17:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:24:12.843726908 +0000 UTC m=+1478.791821555" watchObservedRunningTime="2026-02-16 17:24:12.855001796 +0000 UTC m=+1478.803096443"
Feb 16 17:24:14 crc kubenswrapper[4794]: I0216 17:24:14.851165 4794 generic.go:334] "Generic (PLEG): container finished" podID="ca440010-9acc-4aea-b64a-3f1a500571a1" containerID="3aa36a5c34833ee40309aed4a3b2cb76b2d1a8edbaa13bab80671c6b6623a432" exitCode=0
Feb 16 17:24:14 crc kubenswrapper[4794]: I0216 17:24:14.851540 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca440010-9acc-4aea-b64a-3f1a500571a1","Type":"ContainerDied","Data":"3aa36a5c34833ee40309aed4a3b2cb76b2d1a8edbaa13bab80671c6b6623a432"}
Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.240995 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.273363 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca440010-9acc-4aea-b64a-3f1a500571a1-log-httpd\") pod \"ca440010-9acc-4aea-b64a-3f1a500571a1\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") "
Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.273599 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-sg-core-conf-yaml\") pod \"ca440010-9acc-4aea-b64a-3f1a500571a1\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") "
Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.273710 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca440010-9acc-4aea-b64a-3f1a500571a1-run-httpd\") pod \"ca440010-9acc-4aea-b64a-3f1a500571a1\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") "
Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.273774 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-config-data\") pod \"ca440010-9acc-4aea-b64a-3f1a500571a1\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") "
Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.273826 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-465t9\" (UniqueName: \"kubernetes.io/projected/ca440010-9acc-4aea-b64a-3f1a500571a1-kube-api-access-465t9\") pod
\"ca440010-9acc-4aea-b64a-3f1a500571a1\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.273857 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-scripts\") pod \"ca440010-9acc-4aea-b64a-3f1a500571a1\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.274011 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca440010-9acc-4aea-b64a-3f1a500571a1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ca440010-9acc-4aea-b64a-3f1a500571a1" (UID: "ca440010-9acc-4aea-b64a-3f1a500571a1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.274027 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca440010-9acc-4aea-b64a-3f1a500571a1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ca440010-9acc-4aea-b64a-3f1a500571a1" (UID: "ca440010-9acc-4aea-b64a-3f1a500571a1"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.274006 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-combined-ca-bundle\") pod \"ca440010-9acc-4aea-b64a-3f1a500571a1\" (UID: \"ca440010-9acc-4aea-b64a-3f1a500571a1\") " Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.275079 4794 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca440010-9acc-4aea-b64a-3f1a500571a1-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.275099 4794 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca440010-9acc-4aea-b64a-3f1a500571a1-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.278611 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca440010-9acc-4aea-b64a-3f1a500571a1-kube-api-access-465t9" (OuterVolumeSpecName: "kube-api-access-465t9") pod "ca440010-9acc-4aea-b64a-3f1a500571a1" (UID: "ca440010-9acc-4aea-b64a-3f1a500571a1"). InnerVolumeSpecName "kube-api-access-465t9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.287517 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-scripts" (OuterVolumeSpecName: "scripts") pod "ca440010-9acc-4aea-b64a-3f1a500571a1" (UID: "ca440010-9acc-4aea-b64a-3f1a500571a1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.310419 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ca440010-9acc-4aea-b64a-3f1a500571a1" (UID: "ca440010-9acc-4aea-b64a-3f1a500571a1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.377511 4794 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.377537 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-465t9\" (UniqueName: \"kubernetes.io/projected/ca440010-9acc-4aea-b64a-3f1a500571a1-kube-api-access-465t9\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.377547 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.384956 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca440010-9acc-4aea-b64a-3f1a500571a1" (UID: "ca440010-9acc-4aea-b64a-3f1a500571a1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.407973 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-config-data" (OuterVolumeSpecName: "config-data") pod "ca440010-9acc-4aea-b64a-3f1a500571a1" (UID: "ca440010-9acc-4aea-b64a-3f1a500571a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.479089 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.479341 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca440010-9acc-4aea-b64a-3f1a500571a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.866510 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca440010-9acc-4aea-b64a-3f1a500571a1","Type":"ContainerDied","Data":"2e7bd29c9a72fb5f630368aa553d897cbe2acda3b4a06f46b416c69a72e82361"} Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.866573 4794 scope.go:117] "RemoveContainer" containerID="73ed452ed6a453b75d793415e56eaa0baf856e6ae776a4fa19fc4424f362e031" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.866584 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.893959 4794 scope.go:117] "RemoveContainer" containerID="06751d712288d1af8320103dca2ec8dc264d422a69437f183f6f5ec3a553b846" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.907509 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.916835 4794 scope.go:117] "RemoveContainer" containerID="5281fa4dd554a1db8a0e184f813957398dc5a361f9265b2f85819a54e085156b" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.925520 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.939284 4794 scope.go:117] "RemoveContainer" containerID="3aa36a5c34833ee40309aed4a3b2cb76b2d1a8edbaa13bab80671c6b6623a432" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.947145 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:24:15 crc kubenswrapper[4794]: E0216 17:24:15.947711 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca440010-9acc-4aea-b64a-3f1a500571a1" containerName="ceilometer-notification-agent" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.947737 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca440010-9acc-4aea-b64a-3f1a500571a1" containerName="ceilometer-notification-agent" Feb 16 17:24:15 crc kubenswrapper[4794]: E0216 17:24:15.947751 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca440010-9acc-4aea-b64a-3f1a500571a1" containerName="sg-core" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.947758 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca440010-9acc-4aea-b64a-3f1a500571a1" containerName="sg-core" Feb 16 17:24:15 crc kubenswrapper[4794]: E0216 17:24:15.947777 4794 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ca440010-9acc-4aea-b64a-3f1a500571a1" containerName="proxy-httpd" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.947785 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca440010-9acc-4aea-b64a-3f1a500571a1" containerName="proxy-httpd" Feb 16 17:24:15 crc kubenswrapper[4794]: E0216 17:24:15.947817 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca440010-9acc-4aea-b64a-3f1a500571a1" containerName="ceilometer-central-agent" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.947826 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca440010-9acc-4aea-b64a-3f1a500571a1" containerName="ceilometer-central-agent" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.948081 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca440010-9acc-4aea-b64a-3f1a500571a1" containerName="ceilometer-notification-agent" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.948120 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca440010-9acc-4aea-b64a-3f1a500571a1" containerName="ceilometer-central-agent" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.948745 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca440010-9acc-4aea-b64a-3f1a500571a1" containerName="sg-core" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.948767 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca440010-9acc-4aea-b64a-3f1a500571a1" containerName="proxy-httpd" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.952934 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.956890 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.959740 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.980284 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.995478 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " pod="openstack/ceilometer-0" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.995829 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a990777-4ada-4e8d-ac0f-451a616ec3bc-run-httpd\") pod \"ceilometer-0\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " pod="openstack/ceilometer-0" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.995970 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a990777-4ada-4e8d-ac0f-451a616ec3bc-log-httpd\") pod \"ceilometer-0\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " pod="openstack/ceilometer-0" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.996216 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " 
pod="openstack/ceilometer-0" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.996276 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-config-data\") pod \"ceilometer-0\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " pod="openstack/ceilometer-0" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.996371 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5swd6\" (UniqueName: \"kubernetes.io/projected/2a990777-4ada-4e8d-ac0f-451a616ec3bc-kube-api-access-5swd6\") pod \"ceilometer-0\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " pod="openstack/ceilometer-0" Feb 16 17:24:15 crc kubenswrapper[4794]: I0216 17:24:15.996439 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-scripts\") pod \"ceilometer-0\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " pod="openstack/ceilometer-0" Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.098491 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " pod="openstack/ceilometer-0" Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.099859 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-config-data\") pod \"ceilometer-0\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " pod="openstack/ceilometer-0" Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.099950 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5swd6\" (UniqueName: \"kubernetes.io/projected/2a990777-4ada-4e8d-ac0f-451a616ec3bc-kube-api-access-5swd6\") pod \"ceilometer-0\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " pod="openstack/ceilometer-0" Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.100055 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-scripts\") pod \"ceilometer-0\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " pod="openstack/ceilometer-0" Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.100150 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " pod="openstack/ceilometer-0" Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.100541 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a990777-4ada-4e8d-ac0f-451a616ec3bc-run-httpd\") pod \"ceilometer-0\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " pod="openstack/ceilometer-0" Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.100642 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a990777-4ada-4e8d-ac0f-451a616ec3bc-log-httpd\") pod \"ceilometer-0\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " pod="openstack/ceilometer-0" Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.101120 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a990777-4ada-4e8d-ac0f-451a616ec3bc-log-httpd\") pod \"ceilometer-0\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " pod="openstack/ceilometer-0" Feb 16 17:24:16 crc 
kubenswrapper[4794]: I0216 17:24:16.101155 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a990777-4ada-4e8d-ac0f-451a616ec3bc-run-httpd\") pod \"ceilometer-0\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " pod="openstack/ceilometer-0" Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.103323 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " pod="openstack/ceilometer-0" Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.103482 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " pod="openstack/ceilometer-0" Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.104431 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-config-data\") pod \"ceilometer-0\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " pod="openstack/ceilometer-0" Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.105371 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-scripts\") pod \"ceilometer-0\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " pod="openstack/ceilometer-0" Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.129752 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5swd6\" (UniqueName: \"kubernetes.io/projected/2a990777-4ada-4e8d-ac0f-451a616ec3bc-kube-api-access-5swd6\") pod \"ceilometer-0\" (UID: 
\"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " pod="openstack/ceilometer-0" Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.257375 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.282450 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:24:16 crc kubenswrapper[4794]: W0216 17:24:16.777574 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a990777_4ada_4e8d_ac0f_451a616ec3bc.slice/crio-2934d791b98b7b131b86d01aff68a92acb8c945c86934436a42d5c24c4bb20b7 WatchSource:0}: Error finding container 2934d791b98b7b131b86d01aff68a92acb8c945c86934436a42d5c24c4bb20b7: Status 404 returned error can't find the container with id 2934d791b98b7b131b86d01aff68a92acb8c945c86934436a42d5c24c4bb20b7 Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.779259 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.807370 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca440010-9acc-4aea-b64a-3f1a500571a1" path="/var/lib/kubelet/pods/ca440010-9acc-4aea-b64a-3f1a500571a1/volumes" Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.881341 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a990777-4ada-4e8d-ac0f-451a616ec3bc","Type":"ContainerStarted","Data":"2934d791b98b7b131b86d01aff68a92acb8c945c86934436a42d5c24c4bb20b7"} Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.962391 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-4kglx"] Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.964817 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4kglx" Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.972215 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.972400 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 16 17:24:16 crc kubenswrapper[4794]: I0216 17:24:16.990873 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4kglx"] Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.022167 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnklw\" (UniqueName: \"kubernetes.io/projected/c08d48e2-27f0-44e5-a13a-815719c3f5dc-kube-api-access-hnklw\") pod \"nova-cell0-cell-mapping-4kglx\" (UID: \"c08d48e2-27f0-44e5-a13a-815719c3f5dc\") " pod="openstack/nova-cell0-cell-mapping-4kglx" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.022290 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c08d48e2-27f0-44e5-a13a-815719c3f5dc-scripts\") pod \"nova-cell0-cell-mapping-4kglx\" (UID: \"c08d48e2-27f0-44e5-a13a-815719c3f5dc\") " pod="openstack/nova-cell0-cell-mapping-4kglx" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.022403 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c08d48e2-27f0-44e5-a13a-815719c3f5dc-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4kglx\" (UID: \"c08d48e2-27f0-44e5-a13a-815719c3f5dc\") " pod="openstack/nova-cell0-cell-mapping-4kglx" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.022513 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/c08d48e2-27f0-44e5-a13a-815719c3f5dc-config-data\") pod \"nova-cell0-cell-mapping-4kglx\" (UID: \"c08d48e2-27f0-44e5-a13a-815719c3f5dc\") " pod="openstack/nova-cell0-cell-mapping-4kglx" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.125022 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c08d48e2-27f0-44e5-a13a-815719c3f5dc-scripts\") pod \"nova-cell0-cell-mapping-4kglx\" (UID: \"c08d48e2-27f0-44e5-a13a-815719c3f5dc\") " pod="openstack/nova-cell0-cell-mapping-4kglx" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.125119 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c08d48e2-27f0-44e5-a13a-815719c3f5dc-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4kglx\" (UID: \"c08d48e2-27f0-44e5-a13a-815719c3f5dc\") " pod="openstack/nova-cell0-cell-mapping-4kglx" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.125198 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c08d48e2-27f0-44e5-a13a-815719c3f5dc-config-data\") pod \"nova-cell0-cell-mapping-4kglx\" (UID: \"c08d48e2-27f0-44e5-a13a-815719c3f5dc\") " pod="openstack/nova-cell0-cell-mapping-4kglx" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.125274 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnklw\" (UniqueName: \"kubernetes.io/projected/c08d48e2-27f0-44e5-a13a-815719c3f5dc-kube-api-access-hnklw\") pod \"nova-cell0-cell-mapping-4kglx\" (UID: \"c08d48e2-27f0-44e5-a13a-815719c3f5dc\") " pod="openstack/nova-cell0-cell-mapping-4kglx" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.146247 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c08d48e2-27f0-44e5-a13a-815719c3f5dc-config-data\") pod \"nova-cell0-cell-mapping-4kglx\" (UID: \"c08d48e2-27f0-44e5-a13a-815719c3f5dc\") " pod="openstack/nova-cell0-cell-mapping-4kglx" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.146727 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c08d48e2-27f0-44e5-a13a-815719c3f5dc-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4kglx\" (UID: \"c08d48e2-27f0-44e5-a13a-815719c3f5dc\") " pod="openstack/nova-cell0-cell-mapping-4kglx" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.159867 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c08d48e2-27f0-44e5-a13a-815719c3f5dc-scripts\") pod \"nova-cell0-cell-mapping-4kglx\" (UID: \"c08d48e2-27f0-44e5-a13a-815719c3f5dc\") " pod="openstack/nova-cell0-cell-mapping-4kglx" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.165347 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.174035 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.182616 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnklw\" (UniqueName: \"kubernetes.io/projected/c08d48e2-27f0-44e5-a13a-815719c3f5dc-kube-api-access-hnklw\") pod \"nova-cell0-cell-mapping-4kglx\" (UID: \"c08d48e2-27f0-44e5-a13a-815719c3f5dc\") " pod="openstack/nova-cell0-cell-mapping-4kglx" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.187741 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.226920 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.228073 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/863be36f-716a-4890-9790-1e82c5542f1f-logs\") pod \"nova-api-0\" (UID: \"863be36f-716a-4890-9790-1e82c5542f1f\") " pod="openstack/nova-api-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.228243 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn2rw\" (UniqueName: \"kubernetes.io/projected/863be36f-716a-4890-9790-1e82c5542f1f-kube-api-access-kn2rw\") pod \"nova-api-0\" (UID: \"863be36f-716a-4890-9790-1e82c5542f1f\") " pod="openstack/nova-api-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.228279 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863be36f-716a-4890-9790-1e82c5542f1f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"863be36f-716a-4890-9790-1e82c5542f1f\") " pod="openstack/nova-api-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.228418 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/863be36f-716a-4890-9790-1e82c5542f1f-config-data\") pod \"nova-api-0\" (UID: \"863be36f-716a-4890-9790-1e82c5542f1f\") " pod="openstack/nova-api-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.296570 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.317972 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.302879 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4kglx" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.329678 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.340055 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn2rw\" (UniqueName: \"kubernetes.io/projected/863be36f-716a-4890-9790-1e82c5542f1f-kube-api-access-kn2rw\") pod \"nova-api-0\" (UID: \"863be36f-716a-4890-9790-1e82c5542f1f\") " pod="openstack/nova-api-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.340108 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863be36f-716a-4890-9790-1e82c5542f1f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"863be36f-716a-4890-9790-1e82c5542f1f\") " pod="openstack/nova-api-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.340197 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/863be36f-716a-4890-9790-1e82c5542f1f-config-data\") pod \"nova-api-0\" (UID: \"863be36f-716a-4890-9790-1e82c5542f1f\") " 
pod="openstack/nova-api-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.340259 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/863be36f-716a-4890-9790-1e82c5542f1f-logs\") pod \"nova-api-0\" (UID: \"863be36f-716a-4890-9790-1e82c5542f1f\") " pod="openstack/nova-api-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.340995 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/863be36f-716a-4890-9790-1e82c5542f1f-logs\") pod \"nova-api-0\" (UID: \"863be36f-716a-4890-9790-1e82c5542f1f\") " pod="openstack/nova-api-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.345475 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863be36f-716a-4890-9790-1e82c5542f1f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"863be36f-716a-4890-9790-1e82c5542f1f\") " pod="openstack/nova-api-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.346935 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.356986 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/863be36f-716a-4890-9790-1e82c5542f1f-config-data\") pod \"nova-api-0\" (UID: \"863be36f-716a-4890-9790-1e82c5542f1f\") " pod="openstack/nova-api-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.411850 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn2rw\" (UniqueName: \"kubernetes.io/projected/863be36f-716a-4890-9790-1e82c5542f1f-kube-api-access-kn2rw\") pod \"nova-api-0\" (UID: \"863be36f-716a-4890-9790-1e82c5542f1f\") " pod="openstack/nova-api-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.420097 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.442121 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d32ddf99-3213-4ddb-9916-695dc1f70dfc-logs\") pod \"nova-metadata-0\" (UID: \"d32ddf99-3213-4ddb-9916-695dc1f70dfc\") " pod="openstack/nova-metadata-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.442191 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d32ddf99-3213-4ddb-9916-695dc1f70dfc-config-data\") pod \"nova-metadata-0\" (UID: \"d32ddf99-3213-4ddb-9916-695dc1f70dfc\") " pod="openstack/nova-metadata-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.442269 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d32ddf99-3213-4ddb-9916-695dc1f70dfc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d32ddf99-3213-4ddb-9916-695dc1f70dfc\") " pod="openstack/nova-metadata-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.442360 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qhvz\" (UniqueName: \"kubernetes.io/projected/d32ddf99-3213-4ddb-9916-695dc1f70dfc-kube-api-access-8qhvz\") pod \"nova-metadata-0\" (UID: \"d32ddf99-3213-4ddb-9916-695dc1f70dfc\") " pod="openstack/nova-metadata-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.544797 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d32ddf99-3213-4ddb-9916-695dc1f70dfc-logs\") pod \"nova-metadata-0\" (UID: \"d32ddf99-3213-4ddb-9916-695dc1f70dfc\") " pod="openstack/nova-metadata-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.545123 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d32ddf99-3213-4ddb-9916-695dc1f70dfc-config-data\") pod \"nova-metadata-0\" (UID: \"d32ddf99-3213-4ddb-9916-695dc1f70dfc\") " pod="openstack/nova-metadata-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.545197 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d32ddf99-3213-4ddb-9916-695dc1f70dfc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d32ddf99-3213-4ddb-9916-695dc1f70dfc\") " pod="openstack/nova-metadata-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.545241 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qhvz\" (UniqueName: \"kubernetes.io/projected/d32ddf99-3213-4ddb-9916-695dc1f70dfc-kube-api-access-8qhvz\") pod \"nova-metadata-0\" (UID: \"d32ddf99-3213-4ddb-9916-695dc1f70dfc\") " pod="openstack/nova-metadata-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.549145 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-kdxjb"] Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.550971 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.585404 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d32ddf99-3213-4ddb-9916-695dc1f70dfc-logs\") pod \"nova-metadata-0\" (UID: \"d32ddf99-3213-4ddb-9916-695dc1f70dfc\") " pod="openstack/nova-metadata-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.585895 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d32ddf99-3213-4ddb-9916-695dc1f70dfc-config-data\") pod \"nova-metadata-0\" (UID: \"d32ddf99-3213-4ddb-9916-695dc1f70dfc\") " pod="openstack/nova-metadata-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.593070 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d32ddf99-3213-4ddb-9916-695dc1f70dfc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d32ddf99-3213-4ddb-9916-695dc1f70dfc\") " pod="openstack/nova-metadata-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.602724 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qhvz\" (UniqueName: \"kubernetes.io/projected/d32ddf99-3213-4ddb-9916-695dc1f70dfc-kube-api-access-8qhvz\") pod \"nova-metadata-0\" (UID: \"d32ddf99-3213-4ddb-9916-695dc1f70dfc\") " pod="openstack/nova-metadata-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.602786 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.604296 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.636453 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.645109 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-kdxjb"] Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.647375 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f61c36-3ecb-43dc-a9cd-d713af555005-config-data\") pod \"nova-scheduler-0\" (UID: \"43f61c36-3ecb-43dc-a9cd-d713af555005\") " pod="openstack/nova-scheduler-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.647439 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-config\") pod \"dnsmasq-dns-568d7fd7cf-kdxjb\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.647538 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjmbx\" (UniqueName: \"kubernetes.io/projected/43f61c36-3ecb-43dc-a9cd-d713af555005-kube-api-access-qjmbx\") pod \"nova-scheduler-0\" (UID: \"43f61c36-3ecb-43dc-a9cd-d713af555005\") " pod="openstack/nova-scheduler-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.647643 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5spq\" (UniqueName: \"kubernetes.io/projected/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-kube-api-access-q5spq\") pod \"dnsmasq-dns-568d7fd7cf-kdxjb\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:24:17 crc 
kubenswrapper[4794]: I0216 17:24:17.647716 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-kdxjb\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.647794 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f61c36-3ecb-43dc-a9cd-d713af555005-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"43f61c36-3ecb-43dc-a9cd-d713af555005\") " pod="openstack/nova-scheduler-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.647861 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-kdxjb\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.647901 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-kdxjb\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.647935 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-kdxjb\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" 
Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.739687 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.747521 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.749510 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-kdxjb\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.749554 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f61c36-3ecb-43dc-a9cd-d713af555005-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"43f61c36-3ecb-43dc-a9cd-d713af555005\") " pod="openstack/nova-scheduler-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.749594 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-kdxjb\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.749617 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-kdxjb\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.749638 4794 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-kdxjb\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.749657 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f61c36-3ecb-43dc-a9cd-d713af555005-config-data\") pod \"nova-scheduler-0\" (UID: \"43f61c36-3ecb-43dc-a9cd-d713af555005\") " pod="openstack/nova-scheduler-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.749674 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-config\") pod \"dnsmasq-dns-568d7fd7cf-kdxjb\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.749732 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjmbx\" (UniqueName: \"kubernetes.io/projected/43f61c36-3ecb-43dc-a9cd-d713af555005-kube-api-access-qjmbx\") pod \"nova-scheduler-0\" (UID: \"43f61c36-3ecb-43dc-a9cd-d713af555005\") " pod="openstack/nova-scheduler-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.749801 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5spq\" (UniqueName: \"kubernetes.io/projected/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-kube-api-access-q5spq\") pod \"dnsmasq-dns-568d7fd7cf-kdxjb\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.756208 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-dns-svc\") pod 
\"dnsmasq-dns-568d7fd7cf-kdxjb\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.757163 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-kdxjb\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.757672 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-kdxjb\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.759001 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-config\") pod \"dnsmasq-dns-568d7fd7cf-kdxjb\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.762265 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-kdxjb\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.790118 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjmbx\" (UniqueName: \"kubernetes.io/projected/43f61c36-3ecb-43dc-a9cd-d713af555005-kube-api-access-qjmbx\") pod \"nova-scheduler-0\" (UID: \"43f61c36-3ecb-43dc-a9cd-d713af555005\") " 
pod="openstack/nova-scheduler-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.791944 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f61c36-3ecb-43dc-a9cd-d713af555005-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"43f61c36-3ecb-43dc-a9cd-d713af555005\") " pod="openstack/nova-scheduler-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.796119 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f61c36-3ecb-43dc-a9cd-d713af555005-config-data\") pod \"nova-scheduler-0\" (UID: \"43f61c36-3ecb-43dc-a9cd-d713af555005\") " pod="openstack/nova-scheduler-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.821028 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5spq\" (UniqueName: \"kubernetes.io/projected/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-kube-api-access-q5spq\") pod \"dnsmasq-dns-568d7fd7cf-kdxjb\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.838581 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.840322 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.844764 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.918819 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.953434 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86v64\" (UniqueName: \"kubernetes.io/projected/7ef6f08e-df36-49a2-b05e-b62545488b2d-kube-api-access-86v64\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ef6f08e-df36-49a2-b05e-b62545488b2d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.953710 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ef6f08e-df36-49a2-b05e-b62545488b2d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ef6f08e-df36-49a2-b05e-b62545488b2d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.953981 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef6f08e-df36-49a2-b05e-b62545488b2d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ef6f08e-df36-49a2-b05e-b62545488b2d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.965138 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:24:17 crc kubenswrapper[4794]: I0216 17:24:17.988970 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:24:18 crc kubenswrapper[4794]: I0216 17:24:18.055341 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef6f08e-df36-49a2-b05e-b62545488b2d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ef6f08e-df36-49a2-b05e-b62545488b2d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:18 crc kubenswrapper[4794]: I0216 17:24:18.055655 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86v64\" (UniqueName: \"kubernetes.io/projected/7ef6f08e-df36-49a2-b05e-b62545488b2d-kube-api-access-86v64\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ef6f08e-df36-49a2-b05e-b62545488b2d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:18 crc kubenswrapper[4794]: I0216 17:24:18.055678 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ef6f08e-df36-49a2-b05e-b62545488b2d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ef6f08e-df36-49a2-b05e-b62545488b2d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:18 crc kubenswrapper[4794]: I0216 17:24:18.092941 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef6f08e-df36-49a2-b05e-b62545488b2d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ef6f08e-df36-49a2-b05e-b62545488b2d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:18 crc kubenswrapper[4794]: I0216 17:24:18.093391 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ef6f08e-df36-49a2-b05e-b62545488b2d-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"7ef6f08e-df36-49a2-b05e-b62545488b2d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:18 crc kubenswrapper[4794]: I0216 17:24:18.110889 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86v64\" (UniqueName: \"kubernetes.io/projected/7ef6f08e-df36-49a2-b05e-b62545488b2d-kube-api-access-86v64\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ef6f08e-df36-49a2-b05e-b62545488b2d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:18 crc kubenswrapper[4794]: I0216 17:24:18.230513 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:18 crc kubenswrapper[4794]: I0216 17:24:18.278084 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4kglx"] Feb 16 17:24:18 crc kubenswrapper[4794]: I0216 17:24:18.580446 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:24:18 crc kubenswrapper[4794]: I0216 17:24:18.598897 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:24:18 crc kubenswrapper[4794]: W0216 17:24:18.610165 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod863be36f_716a_4890_9790_1e82c5542f1f.slice/crio-2aab68831be65f4245248e5c1e565267914c6ec33ee10f0cfdd19761f1030557 WatchSource:0}: Error finding container 2aab68831be65f4245248e5c1e565267914c6ec33ee10f0cfdd19761f1030557: Status 404 returned error can't find the container with id 2aab68831be65f4245248e5c1e565267914c6ec33ee10f0cfdd19761f1030557 Feb 16 17:24:18 crc kubenswrapper[4794]: I0216 17:24:18.897884 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-kdxjb"] Feb 16 17:24:18 crc kubenswrapper[4794]: I0216 17:24:18.976973 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-cell-mapping-4kglx" event={"ID":"c08d48e2-27f0-44e5-a13a-815719c3f5dc","Type":"ContainerStarted","Data":"e8abf350a47b29c3209ffe1180e17a1433efc2a261f3f0546d5ea8c697b07457"} Feb 16 17:24:18 crc kubenswrapper[4794]: I0216 17:24:18.977280 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4kglx" event={"ID":"c08d48e2-27f0-44e5-a13a-815719c3f5dc","Type":"ContainerStarted","Data":"773775cfc4fe23a88620cab0e828787740766b5b5b144083038be4beb5ea6f70"} Feb 16 17:24:18 crc kubenswrapper[4794]: I0216 17:24:18.982698 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a990777-4ada-4e8d-ac0f-451a616ec3bc","Type":"ContainerStarted","Data":"151a0e6387a7a8b32580253febd49a76190d33f8308bbf9108816837b1049ad1"} Feb 16 17:24:18 crc kubenswrapper[4794]: I0216 17:24:18.998040 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:24:19 crc kubenswrapper[4794]: I0216 17:24:19.008082 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d32ddf99-3213-4ddb-9916-695dc1f70dfc","Type":"ContainerStarted","Data":"f31f8afee8d1749566e1aa849524fe89812addac9d3dcd4a29dc6f92bcf3f7e5"} Feb 16 17:24:19 crc kubenswrapper[4794]: I0216 17:24:19.015388 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"863be36f-716a-4890-9790-1e82c5542f1f","Type":"ContainerStarted","Data":"2aab68831be65f4245248e5c1e565267914c6ec33ee10f0cfdd19761f1030557"} Feb 16 17:24:19 crc kubenswrapper[4794]: I0216 17:24:19.016251 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-4kglx" podStartSLOduration=3.016229341 podStartE2EDuration="3.016229341s" podCreationTimestamp="2026-02-16 17:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 17:24:18.998280354 +0000 UTC m=+1484.946375001" watchObservedRunningTime="2026-02-16 17:24:19.016229341 +0000 UTC m=+1484.964323988" Feb 16 17:24:19 crc kubenswrapper[4794]: I0216 17:24:19.031155 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" event={"ID":"3fbebaa3-8aa2-4ace-a9c9-558bc3964430","Type":"ContainerStarted","Data":"ff9256edb661d883a8e9fc31aeda100e031fa8dd89b1fff14b8ce121c17bac47"} Feb 16 17:24:19 crc kubenswrapper[4794]: I0216 17:24:19.121065 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:24:19 crc kubenswrapper[4794]: I0216 17:24:19.729936 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9j8qt"] Feb 16 17:24:19 crc kubenswrapper[4794]: I0216 17:24:19.734427 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9j8qt" Feb 16 17:24:19 crc kubenswrapper[4794]: I0216 17:24:19.745727 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9j8qt"] Feb 16 17:24:19 crc kubenswrapper[4794]: I0216 17:24:19.747739 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 16 17:24:19 crc kubenswrapper[4794]: I0216 17:24:19.747911 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 17:24:19 crc kubenswrapper[4794]: I0216 17:24:19.923663 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xj75\" (UniqueName: \"kubernetes.io/projected/521b6a44-f328-4e6e-926b-f27a9b9810ad-kube-api-access-9xj75\") pod \"nova-cell1-conductor-db-sync-9j8qt\" (UID: \"521b6a44-f328-4e6e-926b-f27a9b9810ad\") " pod="openstack/nova-cell1-conductor-db-sync-9j8qt" Feb 16 17:24:19 crc kubenswrapper[4794]: I0216 
17:24:19.923769 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/521b6a44-f328-4e6e-926b-f27a9b9810ad-config-data\") pod \"nova-cell1-conductor-db-sync-9j8qt\" (UID: \"521b6a44-f328-4e6e-926b-f27a9b9810ad\") " pod="openstack/nova-cell1-conductor-db-sync-9j8qt" Feb 16 17:24:19 crc kubenswrapper[4794]: I0216 17:24:19.923917 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/521b6a44-f328-4e6e-926b-f27a9b9810ad-scripts\") pod \"nova-cell1-conductor-db-sync-9j8qt\" (UID: \"521b6a44-f328-4e6e-926b-f27a9b9810ad\") " pod="openstack/nova-cell1-conductor-db-sync-9j8qt" Feb 16 17:24:19 crc kubenswrapper[4794]: I0216 17:24:19.923949 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521b6a44-f328-4e6e-926b-f27a9b9810ad-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9j8qt\" (UID: \"521b6a44-f328-4e6e-926b-f27a9b9810ad\") " pod="openstack/nova-cell1-conductor-db-sync-9j8qt" Feb 16 17:24:20 crc kubenswrapper[4794]: I0216 17:24:20.026312 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xj75\" (UniqueName: \"kubernetes.io/projected/521b6a44-f328-4e6e-926b-f27a9b9810ad-kube-api-access-9xj75\") pod \"nova-cell1-conductor-db-sync-9j8qt\" (UID: \"521b6a44-f328-4e6e-926b-f27a9b9810ad\") " pod="openstack/nova-cell1-conductor-db-sync-9j8qt" Feb 16 17:24:20 crc kubenswrapper[4794]: I0216 17:24:20.026728 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/521b6a44-f328-4e6e-926b-f27a9b9810ad-config-data\") pod \"nova-cell1-conductor-db-sync-9j8qt\" (UID: \"521b6a44-f328-4e6e-926b-f27a9b9810ad\") " pod="openstack/nova-cell1-conductor-db-sync-9j8qt" 
Feb 16 17:24:20 crc kubenswrapper[4794]: I0216 17:24:20.026834 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/521b6a44-f328-4e6e-926b-f27a9b9810ad-scripts\") pod \"nova-cell1-conductor-db-sync-9j8qt\" (UID: \"521b6a44-f328-4e6e-926b-f27a9b9810ad\") " pod="openstack/nova-cell1-conductor-db-sync-9j8qt" Feb 16 17:24:20 crc kubenswrapper[4794]: I0216 17:24:20.026863 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521b6a44-f328-4e6e-926b-f27a9b9810ad-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9j8qt\" (UID: \"521b6a44-f328-4e6e-926b-f27a9b9810ad\") " pod="openstack/nova-cell1-conductor-db-sync-9j8qt" Feb 16 17:24:20 crc kubenswrapper[4794]: I0216 17:24:20.033744 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/521b6a44-f328-4e6e-926b-f27a9b9810ad-config-data\") pod \"nova-cell1-conductor-db-sync-9j8qt\" (UID: \"521b6a44-f328-4e6e-926b-f27a9b9810ad\") " pod="openstack/nova-cell1-conductor-db-sync-9j8qt" Feb 16 17:24:20 crc kubenswrapper[4794]: I0216 17:24:20.033813 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/521b6a44-f328-4e6e-926b-f27a9b9810ad-scripts\") pod \"nova-cell1-conductor-db-sync-9j8qt\" (UID: \"521b6a44-f328-4e6e-926b-f27a9b9810ad\") " pod="openstack/nova-cell1-conductor-db-sync-9j8qt" Feb 16 17:24:20 crc kubenswrapper[4794]: I0216 17:24:20.046279 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521b6a44-f328-4e6e-926b-f27a9b9810ad-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9j8qt\" (UID: \"521b6a44-f328-4e6e-926b-f27a9b9810ad\") " pod="openstack/nova-cell1-conductor-db-sync-9j8qt" Feb 16 17:24:20 crc kubenswrapper[4794]: I0216 
17:24:20.057233 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xj75\" (UniqueName: \"kubernetes.io/projected/521b6a44-f328-4e6e-926b-f27a9b9810ad-kube-api-access-9xj75\") pod \"nova-cell1-conductor-db-sync-9j8qt\" (UID: \"521b6a44-f328-4e6e-926b-f27a9b9810ad\") " pod="openstack/nova-cell1-conductor-db-sync-9j8qt" Feb 16 17:24:20 crc kubenswrapper[4794]: I0216 17:24:20.058561 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ef6f08e-df36-49a2-b05e-b62545488b2d","Type":"ContainerStarted","Data":"de3938786386d6912af785e3fc893d9dd13710d5399a97c82c605b6154fa62c4"} Feb 16 17:24:20 crc kubenswrapper[4794]: I0216 17:24:20.069681 4794 generic.go:334] "Generic (PLEG): container finished" podID="3fbebaa3-8aa2-4ace-a9c9-558bc3964430" containerID="128e84ee994db10b71ef37c8025aa78608235ade22a8dc2863eec2584b1dd6b5" exitCode=0 Feb 16 17:24:20 crc kubenswrapper[4794]: I0216 17:24:20.069762 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" event={"ID":"3fbebaa3-8aa2-4ace-a9c9-558bc3964430","Type":"ContainerDied","Data":"128e84ee994db10b71ef37c8025aa78608235ade22a8dc2863eec2584b1dd6b5"} Feb 16 17:24:20 crc kubenswrapper[4794]: I0216 17:24:20.075562 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9j8qt" Feb 16 17:24:20 crc kubenswrapper[4794]: I0216 17:24:20.092754 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a990777-4ada-4e8d-ac0f-451a616ec3bc","Type":"ContainerStarted","Data":"32a49dad1d023460fed090486f413901c58862ab95b018902e8f1343c3efceea"} Feb 16 17:24:20 crc kubenswrapper[4794]: I0216 17:24:20.101496 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"43f61c36-3ecb-43dc-a9cd-d713af555005","Type":"ContainerStarted","Data":"b1144509806ec8559531176fd9266e4d3dae26701e06b81c2b4e0e12709d7806"} Feb 16 17:24:20 crc kubenswrapper[4794]: I0216 17:24:20.146127 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:24:20 crc kubenswrapper[4794]: I0216 17:24:20.146198 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:24:20 crc kubenswrapper[4794]: I0216 17:24:20.146277 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 17:24:20 crc kubenswrapper[4794]: I0216 17:24:20.147816 4794 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691"} pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:24:20 crc kubenswrapper[4794]: I0216 17:24:20.147881 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" containerID="cri-o://6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691" gracePeriod=600 Feb 16 17:24:20 crc kubenswrapper[4794]: E0216 17:24:20.316960 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:24:20 crc kubenswrapper[4794]: I0216 17:24:20.673328 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9j8qt"] Feb 16 17:24:21 crc kubenswrapper[4794]: I0216 17:24:21.019938 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:24:21 crc kubenswrapper[4794]: I0216 17:24:21.031526 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:24:21 crc kubenswrapper[4794]: I0216 17:24:21.122485 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" event={"ID":"3fbebaa3-8aa2-4ace-a9c9-558bc3964430","Type":"ContainerStarted","Data":"a00b53ad46b822a70c9339195ca2a4b34915849555540ce220adb1a6c8f851a8"} Feb 16 17:24:21 crc kubenswrapper[4794]: I0216 17:24:21.122548 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:24:21 crc kubenswrapper[4794]: I0216 
17:24:21.129724 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a990777-4ada-4e8d-ac0f-451a616ec3bc","Type":"ContainerStarted","Data":"eca50d3d6fed1fb508bf608a4b887e8e035a6a3a9f39e4ace7c87216bc41269a"} Feb 16 17:24:21 crc kubenswrapper[4794]: I0216 17:24:21.132851 4794 generic.go:334] "Generic (PLEG): container finished" podID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691" exitCode=0 Feb 16 17:24:21 crc kubenswrapper[4794]: I0216 17:24:21.132904 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerDied","Data":"6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691"} Feb 16 17:24:21 crc kubenswrapper[4794]: I0216 17:24:21.132939 4794 scope.go:117] "RemoveContainer" containerID="07948fa6ee2afc937a020c1d294030183c36d82f2764d0a7fd3e60ea347005ea" Feb 16 17:24:21 crc kubenswrapper[4794]: I0216 17:24:21.133810 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691" Feb 16 17:24:21 crc kubenswrapper[4794]: E0216 17:24:21.134190 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:24:21 crc kubenswrapper[4794]: I0216 17:24:21.140918 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9j8qt" 
event={"ID":"521b6a44-f328-4e6e-926b-f27a9b9810ad","Type":"ContainerStarted","Data":"3d989728a6c6473563fd329699220c7c5105ee4d0614aa7048d2de1a9a071282"} Feb 16 17:24:21 crc kubenswrapper[4794]: I0216 17:24:21.203353 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" podStartSLOduration=4.203279323 podStartE2EDuration="4.203279323s" podCreationTimestamp="2026-02-16 17:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:24:21.147892389 +0000 UTC m=+1487.095987046" watchObservedRunningTime="2026-02-16 17:24:21.203279323 +0000 UTC m=+1487.151373970" Feb 16 17:24:21 crc kubenswrapper[4794]: I0216 17:24:21.506679 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:24:23 crc kubenswrapper[4794]: I0216 17:24:23.164839 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9j8qt" event={"ID":"521b6a44-f328-4e6e-926b-f27a9b9810ad","Type":"ContainerStarted","Data":"cef756b523489089cdfc52fe85cf59247cde121a8515537da9a4a1f17ba2c217"} Feb 16 17:24:23 crc kubenswrapper[4794]: I0216 17:24:23.184353 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-9j8qt" podStartSLOduration=4.184333309 podStartE2EDuration="4.184333309s" podCreationTimestamp="2026-02-16 17:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:24:23.178734341 +0000 UTC m=+1489.126828988" watchObservedRunningTime="2026-02-16 17:24:23.184333309 +0000 UTC m=+1489.132427956" Feb 16 17:24:24 crc kubenswrapper[4794]: I0216 17:24:24.198750 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"863be36f-716a-4890-9790-1e82c5542f1f","Type":"ContainerStarted","Data":"7da1c55f756dd17a88b39d85dc53a6648ed8d9966956f0f579ccd7603fac65b7"} Feb 16 17:24:24 crc kubenswrapper[4794]: I0216 17:24:24.203157 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a990777-4ada-4e8d-ac0f-451a616ec3bc","Type":"ContainerStarted","Data":"ddc537f70607c175d12073de1959c9cea69b371ff4837fbf5068cc3ed7ac8cbb"} Feb 16 17:24:24 crc kubenswrapper[4794]: I0216 17:24:24.203481 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2a990777-4ada-4e8d-ac0f-451a616ec3bc" containerName="ceilometer-central-agent" containerID="cri-o://151a0e6387a7a8b32580253febd49a76190d33f8308bbf9108816837b1049ad1" gracePeriod=30 Feb 16 17:24:24 crc kubenswrapper[4794]: I0216 17:24:24.203645 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2a990777-4ada-4e8d-ac0f-451a616ec3bc" containerName="proxy-httpd" containerID="cri-o://ddc537f70607c175d12073de1959c9cea69b371ff4837fbf5068cc3ed7ac8cbb" gracePeriod=30 Feb 16 17:24:24 crc kubenswrapper[4794]: I0216 17:24:24.203683 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2a990777-4ada-4e8d-ac0f-451a616ec3bc" containerName="ceilometer-notification-agent" containerID="cri-o://32a49dad1d023460fed090486f413901c58862ab95b018902e8f1343c3efceea" gracePeriod=30 Feb 16 17:24:24 crc kubenswrapper[4794]: I0216 17:24:24.203605 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2a990777-4ada-4e8d-ac0f-451a616ec3bc" containerName="sg-core" containerID="cri-o://eca50d3d6fed1fb508bf608a4b887e8e035a6a3a9f39e4ace7c87216bc41269a" gracePeriod=30 Feb 16 17:24:24 crc kubenswrapper[4794]: I0216 17:24:24.210693 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"d32ddf99-3213-4ddb-9916-695dc1f70dfc","Type":"ContainerStarted","Data":"6893c8b477ba0d9e376c0d987a5da925ab307383c55f48346d050ab22236a63f"} Feb 16 17:24:24 crc kubenswrapper[4794]: I0216 17:24:24.215745 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"43f61c36-3ecb-43dc-a9cd-d713af555005","Type":"ContainerStarted","Data":"6a13c70a904a7546f869de189ad3d3f65091bd7b1966f754e6cf7722f8723eb5"} Feb 16 17:24:24 crc kubenswrapper[4794]: I0216 17:24:24.217150 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ef6f08e-df36-49a2-b05e-b62545488b2d","Type":"ContainerStarted","Data":"2079d767c31a67dd91206f1ef52ef209de8fe0db27ad7f46c94629596d1cac5e"} Feb 16 17:24:24 crc kubenswrapper[4794]: I0216 17:24:24.217356 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="7ef6f08e-df36-49a2-b05e-b62545488b2d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://2079d767c31a67dd91206f1ef52ef209de8fe0db27ad7f46c94629596d1cac5e" gracePeriod=30 Feb 16 17:24:24 crc kubenswrapper[4794]: I0216 17:24:24.239747 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.799853942 podStartE2EDuration="9.239722593s" podCreationTimestamp="2026-02-16 17:24:15 +0000 UTC" firstStartedPulling="2026-02-16 17:24:16.780693971 +0000 UTC m=+1482.728788638" lastFinishedPulling="2026-02-16 17:24:23.220562642 +0000 UTC m=+1489.168657289" observedRunningTime="2026-02-16 17:24:24.227340624 +0000 UTC m=+1490.175435271" watchObservedRunningTime="2026-02-16 17:24:24.239722593 +0000 UTC m=+1490.187817240" Feb 16 17:24:24 crc kubenswrapper[4794]: I0216 17:24:24.259370 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.116655496 podStartE2EDuration="7.259351527s" 
podCreationTimestamp="2026-02-16 17:24:17 +0000 UTC" firstStartedPulling="2026-02-16 17:24:19.066969153 +0000 UTC m=+1485.015063800" lastFinishedPulling="2026-02-16 17:24:23.209665174 +0000 UTC m=+1489.157759831" observedRunningTime="2026-02-16 17:24:24.254240213 +0000 UTC m=+1490.202334860" watchObservedRunningTime="2026-02-16 17:24:24.259351527 +0000 UTC m=+1490.207446174" Feb 16 17:24:24 crc kubenswrapper[4794]: I0216 17:24:24.285935 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.211764542 podStartE2EDuration="7.285917177s" podCreationTimestamp="2026-02-16 17:24:17 +0000 UTC" firstStartedPulling="2026-02-16 17:24:19.142249579 +0000 UTC m=+1485.090344226" lastFinishedPulling="2026-02-16 17:24:23.216402214 +0000 UTC m=+1489.164496861" observedRunningTime="2026-02-16 17:24:24.28388692 +0000 UTC m=+1490.231981567" watchObservedRunningTime="2026-02-16 17:24:24.285917177 +0000 UTC m=+1490.234011824" Feb 16 17:24:25 crc kubenswrapper[4794]: I0216 17:24:25.230748 4794 generic.go:334] "Generic (PLEG): container finished" podID="2a990777-4ada-4e8d-ac0f-451a616ec3bc" containerID="ddc537f70607c175d12073de1959c9cea69b371ff4837fbf5068cc3ed7ac8cbb" exitCode=0 Feb 16 17:24:25 crc kubenswrapper[4794]: I0216 17:24:25.231083 4794 generic.go:334] "Generic (PLEG): container finished" podID="2a990777-4ada-4e8d-ac0f-451a616ec3bc" containerID="eca50d3d6fed1fb508bf608a4b887e8e035a6a3a9f39e4ace7c87216bc41269a" exitCode=2 Feb 16 17:24:25 crc kubenswrapper[4794]: I0216 17:24:25.231095 4794 generic.go:334] "Generic (PLEG): container finished" podID="2a990777-4ada-4e8d-ac0f-451a616ec3bc" containerID="32a49dad1d023460fed090486f413901c58862ab95b018902e8f1343c3efceea" exitCode=0 Feb 16 17:24:25 crc kubenswrapper[4794]: I0216 17:24:25.230824 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2a990777-4ada-4e8d-ac0f-451a616ec3bc","Type":"ContainerDied","Data":"ddc537f70607c175d12073de1959c9cea69b371ff4837fbf5068cc3ed7ac8cbb"} Feb 16 17:24:25 crc kubenswrapper[4794]: I0216 17:24:25.231160 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a990777-4ada-4e8d-ac0f-451a616ec3bc","Type":"ContainerDied","Data":"eca50d3d6fed1fb508bf608a4b887e8e035a6a3a9f39e4ace7c87216bc41269a"} Feb 16 17:24:25 crc kubenswrapper[4794]: I0216 17:24:25.231174 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a990777-4ada-4e8d-ac0f-451a616ec3bc","Type":"ContainerDied","Data":"32a49dad1d023460fed090486f413901c58862ab95b018902e8f1343c3efceea"} Feb 16 17:24:25 crc kubenswrapper[4794]: I0216 17:24:25.234815 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d32ddf99-3213-4ddb-9916-695dc1f70dfc","Type":"ContainerStarted","Data":"245d1fdd850a50164a2aa4dd4d753826ca9451243c0351732a0e0f009e5ada10"} Feb 16 17:24:25 crc kubenswrapper[4794]: I0216 17:24:25.235334 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d32ddf99-3213-4ddb-9916-695dc1f70dfc" containerName="nova-metadata-metadata" containerID="cri-o://245d1fdd850a50164a2aa4dd4d753826ca9451243c0351732a0e0f009e5ada10" gracePeriod=30 Feb 16 17:24:25 crc kubenswrapper[4794]: I0216 17:24:25.235264 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d32ddf99-3213-4ddb-9916-695dc1f70dfc" containerName="nova-metadata-log" containerID="cri-o://6893c8b477ba0d9e376c0d987a5da925ab307383c55f48346d050ab22236a63f" gracePeriod=30 Feb 16 17:24:25 crc kubenswrapper[4794]: I0216 17:24:25.243455 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"863be36f-716a-4890-9790-1e82c5542f1f","Type":"ContainerStarted","Data":"76159a0e72c392553cae5906836fd1e7b1067347b69184a2998a3d394b57803b"} Feb 16 17:24:25 crc kubenswrapper[4794]: I0216 17:24:25.266071 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.658418443 podStartE2EDuration="8.266051829s" podCreationTimestamp="2026-02-16 17:24:17 +0000 UTC" firstStartedPulling="2026-02-16 17:24:18.60883722 +0000 UTC m=+1484.556931867" lastFinishedPulling="2026-02-16 17:24:23.216470606 +0000 UTC m=+1489.164565253" observedRunningTime="2026-02-16 17:24:25.263153747 +0000 UTC m=+1491.211248404" watchObservedRunningTime="2026-02-16 17:24:25.266051829 +0000 UTC m=+1491.214146466" Feb 16 17:24:25 crc kubenswrapper[4794]: I0216 17:24:25.295390 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.729513873 podStartE2EDuration="8.295367489s" podCreationTimestamp="2026-02-16 17:24:17 +0000 UTC" firstStartedPulling="2026-02-16 17:24:18.629594326 +0000 UTC m=+1484.577688973" lastFinishedPulling="2026-02-16 17:24:23.195447942 +0000 UTC m=+1489.143542589" observedRunningTime="2026-02-16 17:24:25.292394225 +0000 UTC m=+1491.240488892" watchObservedRunningTime="2026-02-16 17:24:25.295367489 +0000 UTC m=+1491.243462136" Feb 16 17:24:25 crc kubenswrapper[4794]: I0216 17:24:25.972433 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.114328 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d32ddf99-3213-4ddb-9916-695dc1f70dfc-logs\") pod \"d32ddf99-3213-4ddb-9916-695dc1f70dfc\" (UID: \"d32ddf99-3213-4ddb-9916-695dc1f70dfc\") " Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.114446 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d32ddf99-3213-4ddb-9916-695dc1f70dfc-config-data\") pod \"d32ddf99-3213-4ddb-9916-695dc1f70dfc\" (UID: \"d32ddf99-3213-4ddb-9916-695dc1f70dfc\") " Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.114560 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d32ddf99-3213-4ddb-9916-695dc1f70dfc-combined-ca-bundle\") pod \"d32ddf99-3213-4ddb-9916-695dc1f70dfc\" (UID: \"d32ddf99-3213-4ddb-9916-695dc1f70dfc\") " Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.114692 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d32ddf99-3213-4ddb-9916-695dc1f70dfc-logs" (OuterVolumeSpecName: "logs") pod "d32ddf99-3213-4ddb-9916-695dc1f70dfc" (UID: "d32ddf99-3213-4ddb-9916-695dc1f70dfc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.114757 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qhvz\" (UniqueName: \"kubernetes.io/projected/d32ddf99-3213-4ddb-9916-695dc1f70dfc-kube-api-access-8qhvz\") pod \"d32ddf99-3213-4ddb-9916-695dc1f70dfc\" (UID: \"d32ddf99-3213-4ddb-9916-695dc1f70dfc\") " Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.115519 4794 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d32ddf99-3213-4ddb-9916-695dc1f70dfc-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.133702 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d32ddf99-3213-4ddb-9916-695dc1f70dfc-kube-api-access-8qhvz" (OuterVolumeSpecName: "kube-api-access-8qhvz") pod "d32ddf99-3213-4ddb-9916-695dc1f70dfc" (UID: "d32ddf99-3213-4ddb-9916-695dc1f70dfc"). InnerVolumeSpecName "kube-api-access-8qhvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.145617 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d32ddf99-3213-4ddb-9916-695dc1f70dfc-config-data" (OuterVolumeSpecName: "config-data") pod "d32ddf99-3213-4ddb-9916-695dc1f70dfc" (UID: "d32ddf99-3213-4ddb-9916-695dc1f70dfc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.179446 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d32ddf99-3213-4ddb-9916-695dc1f70dfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d32ddf99-3213-4ddb-9916-695dc1f70dfc" (UID: "d32ddf99-3213-4ddb-9916-695dc1f70dfc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.217965 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d32ddf99-3213-4ddb-9916-695dc1f70dfc-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.217996 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d32ddf99-3213-4ddb-9916-695dc1f70dfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.218007 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qhvz\" (UniqueName: \"kubernetes.io/projected/d32ddf99-3213-4ddb-9916-695dc1f70dfc-kube-api-access-8qhvz\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.260446 4794 generic.go:334] "Generic (PLEG): container finished" podID="d32ddf99-3213-4ddb-9916-695dc1f70dfc" containerID="245d1fdd850a50164a2aa4dd4d753826ca9451243c0351732a0e0f009e5ada10" exitCode=0 Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.260494 4794 generic.go:334] "Generic (PLEG): container finished" podID="d32ddf99-3213-4ddb-9916-695dc1f70dfc" containerID="6893c8b477ba0d9e376c0d987a5da925ab307383c55f48346d050ab22236a63f" exitCode=143 Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.260518 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.260532 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d32ddf99-3213-4ddb-9916-695dc1f70dfc","Type":"ContainerDied","Data":"245d1fdd850a50164a2aa4dd4d753826ca9451243c0351732a0e0f009e5ada10"} Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.260571 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d32ddf99-3213-4ddb-9916-695dc1f70dfc","Type":"ContainerDied","Data":"6893c8b477ba0d9e376c0d987a5da925ab307383c55f48346d050ab22236a63f"} Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.260582 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d32ddf99-3213-4ddb-9916-695dc1f70dfc","Type":"ContainerDied","Data":"f31f8afee8d1749566e1aa849524fe89812addac9d3dcd4a29dc6f92bcf3f7e5"} Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.260606 4794 scope.go:117] "RemoveContainer" containerID="245d1fdd850a50164a2aa4dd4d753826ca9451243c0351732a0e0f009e5ada10" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.306864 4794 scope.go:117] "RemoveContainer" containerID="6893c8b477ba0d9e376c0d987a5da925ab307383c55f48346d050ab22236a63f" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.320077 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.341349 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.346448 4794 scope.go:117] "RemoveContainer" containerID="245d1fdd850a50164a2aa4dd4d753826ca9451243c0351732a0e0f009e5ada10" Feb 16 17:24:26 crc kubenswrapper[4794]: E0216 17:24:26.349606 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"245d1fdd850a50164a2aa4dd4d753826ca9451243c0351732a0e0f009e5ada10\": container with ID starting with 245d1fdd850a50164a2aa4dd4d753826ca9451243c0351732a0e0f009e5ada10 not found: ID does not exist" containerID="245d1fdd850a50164a2aa4dd4d753826ca9451243c0351732a0e0f009e5ada10" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.349650 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"245d1fdd850a50164a2aa4dd4d753826ca9451243c0351732a0e0f009e5ada10"} err="failed to get container status \"245d1fdd850a50164a2aa4dd4d753826ca9451243c0351732a0e0f009e5ada10\": rpc error: code = NotFound desc = could not find container \"245d1fdd850a50164a2aa4dd4d753826ca9451243c0351732a0e0f009e5ada10\": container with ID starting with 245d1fdd850a50164a2aa4dd4d753826ca9451243c0351732a0e0f009e5ada10 not found: ID does not exist" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.349677 4794 scope.go:117] "RemoveContainer" containerID="6893c8b477ba0d9e376c0d987a5da925ab307383c55f48346d050ab22236a63f" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.357464 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:24:26 crc kubenswrapper[4794]: E0216 17:24:26.357944 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d32ddf99-3213-4ddb-9916-695dc1f70dfc" containerName="nova-metadata-metadata" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.357960 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d32ddf99-3213-4ddb-9916-695dc1f70dfc" containerName="nova-metadata-metadata" Feb 16 17:24:26 crc kubenswrapper[4794]: E0216 17:24:26.357994 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d32ddf99-3213-4ddb-9916-695dc1f70dfc" containerName="nova-metadata-log" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.358001 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d32ddf99-3213-4ddb-9916-695dc1f70dfc" 
containerName="nova-metadata-log" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.358213 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d32ddf99-3213-4ddb-9916-695dc1f70dfc" containerName="nova-metadata-metadata" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.358231 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d32ddf99-3213-4ddb-9916-695dc1f70dfc" containerName="nova-metadata-log" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.359408 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.362198 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.362372 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 17:24:26 crc kubenswrapper[4794]: E0216 17:24:26.378403 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6893c8b477ba0d9e376c0d987a5da925ab307383c55f48346d050ab22236a63f\": container with ID starting with 6893c8b477ba0d9e376c0d987a5da925ab307383c55f48346d050ab22236a63f not found: ID does not exist" containerID="6893c8b477ba0d9e376c0d987a5da925ab307383c55f48346d050ab22236a63f" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.378443 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6893c8b477ba0d9e376c0d987a5da925ab307383c55f48346d050ab22236a63f"} err="failed to get container status \"6893c8b477ba0d9e376c0d987a5da925ab307383c55f48346d050ab22236a63f\": rpc error: code = NotFound desc = could not find container \"6893c8b477ba0d9e376c0d987a5da925ab307383c55f48346d050ab22236a63f\": container with ID starting with 6893c8b477ba0d9e376c0d987a5da925ab307383c55f48346d050ab22236a63f not 
found: ID does not exist" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.378473 4794 scope.go:117] "RemoveContainer" containerID="245d1fdd850a50164a2aa4dd4d753826ca9451243c0351732a0e0f009e5ada10" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.386001 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"245d1fdd850a50164a2aa4dd4d753826ca9451243c0351732a0e0f009e5ada10"} err="failed to get container status \"245d1fdd850a50164a2aa4dd4d753826ca9451243c0351732a0e0f009e5ada10\": rpc error: code = NotFound desc = could not find container \"245d1fdd850a50164a2aa4dd4d753826ca9451243c0351732a0e0f009e5ada10\": container with ID starting with 245d1fdd850a50164a2aa4dd4d753826ca9451243c0351732a0e0f009e5ada10 not found: ID does not exist" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.386031 4794 scope.go:117] "RemoveContainer" containerID="6893c8b477ba0d9e376c0d987a5da925ab307383c55f48346d050ab22236a63f" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.386831 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6893c8b477ba0d9e376c0d987a5da925ab307383c55f48346d050ab22236a63f"} err="failed to get container status \"6893c8b477ba0d9e376c0d987a5da925ab307383c55f48346d050ab22236a63f\": rpc error: code = NotFound desc = could not find container \"6893c8b477ba0d9e376c0d987a5da925ab307383c55f48346d050ab22236a63f\": container with ID starting with 6893c8b477ba0d9e376c0d987a5da925ab307383c55f48346d050ab22236a63f not found: ID does not exist" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.393410 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.524036 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m2ls\" (UniqueName: 
\"kubernetes.io/projected/60bc3506-fc79-458a-bae4-cedfe5f09450-kube-api-access-5m2ls\") pod \"nova-metadata-0\" (UID: \"60bc3506-fc79-458a-bae4-cedfe5f09450\") " pod="openstack/nova-metadata-0" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.524398 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/60bc3506-fc79-458a-bae4-cedfe5f09450-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"60bc3506-fc79-458a-bae4-cedfe5f09450\") " pod="openstack/nova-metadata-0" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.524535 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60bc3506-fc79-458a-bae4-cedfe5f09450-config-data\") pod \"nova-metadata-0\" (UID: \"60bc3506-fc79-458a-bae4-cedfe5f09450\") " pod="openstack/nova-metadata-0" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.524577 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60bc3506-fc79-458a-bae4-cedfe5f09450-logs\") pod \"nova-metadata-0\" (UID: \"60bc3506-fc79-458a-bae4-cedfe5f09450\") " pod="openstack/nova-metadata-0" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.524641 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60bc3506-fc79-458a-bae4-cedfe5f09450-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"60bc3506-fc79-458a-bae4-cedfe5f09450\") " pod="openstack/nova-metadata-0" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.626557 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60bc3506-fc79-458a-bae4-cedfe5f09450-combined-ca-bundle\") pod \"nova-metadata-0\" 
(UID: \"60bc3506-fc79-458a-bae4-cedfe5f09450\") " pod="openstack/nova-metadata-0" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.626645 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m2ls\" (UniqueName: \"kubernetes.io/projected/60bc3506-fc79-458a-bae4-cedfe5f09450-kube-api-access-5m2ls\") pod \"nova-metadata-0\" (UID: \"60bc3506-fc79-458a-bae4-cedfe5f09450\") " pod="openstack/nova-metadata-0" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.626681 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/60bc3506-fc79-458a-bae4-cedfe5f09450-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"60bc3506-fc79-458a-bae4-cedfe5f09450\") " pod="openstack/nova-metadata-0" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.626804 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60bc3506-fc79-458a-bae4-cedfe5f09450-config-data\") pod \"nova-metadata-0\" (UID: \"60bc3506-fc79-458a-bae4-cedfe5f09450\") " pod="openstack/nova-metadata-0" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.626852 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60bc3506-fc79-458a-bae4-cedfe5f09450-logs\") pod \"nova-metadata-0\" (UID: \"60bc3506-fc79-458a-bae4-cedfe5f09450\") " pod="openstack/nova-metadata-0" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.627439 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60bc3506-fc79-458a-bae4-cedfe5f09450-logs\") pod \"nova-metadata-0\" (UID: \"60bc3506-fc79-458a-bae4-cedfe5f09450\") " pod="openstack/nova-metadata-0" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.630738 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/60bc3506-fc79-458a-bae4-cedfe5f09450-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"60bc3506-fc79-458a-bae4-cedfe5f09450\") " pod="openstack/nova-metadata-0" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.631851 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60bc3506-fc79-458a-bae4-cedfe5f09450-config-data\") pod \"nova-metadata-0\" (UID: \"60bc3506-fc79-458a-bae4-cedfe5f09450\") " pod="openstack/nova-metadata-0" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.631866 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60bc3506-fc79-458a-bae4-cedfe5f09450-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"60bc3506-fc79-458a-bae4-cedfe5f09450\") " pod="openstack/nova-metadata-0" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.659895 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m2ls\" (UniqueName: \"kubernetes.io/projected/60bc3506-fc79-458a-bae4-cedfe5f09450-kube-api-access-5m2ls\") pod \"nova-metadata-0\" (UID: \"60bc3506-fc79-458a-bae4-cedfe5f09450\") " pod="openstack/nova-metadata-0" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.703628 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:24:26 crc kubenswrapper[4794]: I0216 17:24:26.832958 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d32ddf99-3213-4ddb-9916-695dc1f70dfc" path="/var/lib/kubelet/pods/d32ddf99-3213-4ddb-9916-695dc1f70dfc/volumes" Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.242096 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.275952 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"60bc3506-fc79-458a-bae4-cedfe5f09450","Type":"ContainerStarted","Data":"a7159595670de318a94b8def56b4cdf01cb8a5e4e6ad0fcc7ec09718f5965dcd"} Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.421514 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.422258 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.572682 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-rn6n9"] Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.575026 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-rn6n9" Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.625013 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-4618-account-create-update-s8vpk"] Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.628056 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-4618-account-create-update-s8vpk" Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.629767 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.648403 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-rn6n9"] Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.655520 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78df36ef-5c86-41c8-9085-7ce98caad880-operator-scripts\") pod \"aodh-db-create-rn6n9\" (UID: \"78df36ef-5c86-41c8-9085-7ce98caad880\") " pod="openstack/aodh-db-create-rn6n9" Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.655616 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5qx7\" (UniqueName: \"kubernetes.io/projected/78df36ef-5c86-41c8-9085-7ce98caad880-kube-api-access-g5qx7\") pod \"aodh-db-create-rn6n9\" (UID: \"78df36ef-5c86-41c8-9085-7ce98caad880\") " pod="openstack/aodh-db-create-rn6n9" Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.668145 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-4618-account-create-update-s8vpk"] Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.761811 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6bd641d-034e-45b5-9379-422fe35d0054-operator-scripts\") pod \"aodh-4618-account-create-update-s8vpk\" (UID: \"a6bd641d-034e-45b5-9379-422fe35d0054\") " pod="openstack/aodh-4618-account-create-update-s8vpk" Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.762025 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/78df36ef-5c86-41c8-9085-7ce98caad880-operator-scripts\") pod \"aodh-db-create-rn6n9\" (UID: \"78df36ef-5c86-41c8-9085-7ce98caad880\") " pod="openstack/aodh-db-create-rn6n9" Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.763317 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78df36ef-5c86-41c8-9085-7ce98caad880-operator-scripts\") pod \"aodh-db-create-rn6n9\" (UID: \"78df36ef-5c86-41c8-9085-7ce98caad880\") " pod="openstack/aodh-db-create-rn6n9" Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.763648 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5qx7\" (UniqueName: \"kubernetes.io/projected/78df36ef-5c86-41c8-9085-7ce98caad880-kube-api-access-g5qx7\") pod \"aodh-db-create-rn6n9\" (UID: \"78df36ef-5c86-41c8-9085-7ce98caad880\") " pod="openstack/aodh-db-create-rn6n9" Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.763709 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpcjl\" (UniqueName: \"kubernetes.io/projected/a6bd641d-034e-45b5-9379-422fe35d0054-kube-api-access-qpcjl\") pod \"aodh-4618-account-create-update-s8vpk\" (UID: \"a6bd641d-034e-45b5-9379-422fe35d0054\") " pod="openstack/aodh-4618-account-create-update-s8vpk" Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.786871 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5qx7\" (UniqueName: \"kubernetes.io/projected/78df36ef-5c86-41c8-9085-7ce98caad880-kube-api-access-g5qx7\") pod \"aodh-db-create-rn6n9\" (UID: \"78df36ef-5c86-41c8-9085-7ce98caad880\") " pod="openstack/aodh-db-create-rn6n9" Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.868991 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpcjl\" (UniqueName: 
\"kubernetes.io/projected/a6bd641d-034e-45b5-9379-422fe35d0054-kube-api-access-qpcjl\") pod \"aodh-4618-account-create-update-s8vpk\" (UID: \"a6bd641d-034e-45b5-9379-422fe35d0054\") " pod="openstack/aodh-4618-account-create-update-s8vpk" Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.870456 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6bd641d-034e-45b5-9379-422fe35d0054-operator-scripts\") pod \"aodh-4618-account-create-update-s8vpk\" (UID: \"a6bd641d-034e-45b5-9379-422fe35d0054\") " pod="openstack/aodh-4618-account-create-update-s8vpk" Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.873367 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6bd641d-034e-45b5-9379-422fe35d0054-operator-scripts\") pod \"aodh-4618-account-create-update-s8vpk\" (UID: \"a6bd641d-034e-45b5-9379-422fe35d0054\") " pod="openstack/aodh-4618-account-create-update-s8vpk" Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.892923 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpcjl\" (UniqueName: \"kubernetes.io/projected/a6bd641d-034e-45b5-9379-422fe35d0054-kube-api-access-qpcjl\") pod \"aodh-4618-account-create-update-s8vpk\" (UID: \"a6bd641d-034e-45b5-9379-422fe35d0054\") " pod="openstack/aodh-4618-account-create-update-s8vpk" Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.921518 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.967737 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 17:24:27 crc kubenswrapper[4794]: I0216 17:24:27.967798 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 17:24:27 
crc kubenswrapper[4794]: I0216 17:24:27.977898 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-rn6n9" Feb 16 17:24:28 crc kubenswrapper[4794]: I0216 17:24:28.001527 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 17:24:28 crc kubenswrapper[4794]: I0216 17:24:28.008742 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-4pz6j"] Feb 16 17:24:28 crc kubenswrapper[4794]: I0216 17:24:28.008976 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" podUID="fb9be534-e864-414a-8dcd-6c9457f6f0bc" containerName="dnsmasq-dns" containerID="cri-o://5f4596ab47307f80837b2a4ca53f5e88933927c6306098f0ab7184132bd7c176" gracePeriod=10 Feb 16 17:24:28 crc kubenswrapper[4794]: I0216 17:24:28.036408 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-4618-account-create-update-s8vpk" Feb 16 17:24:28 crc kubenswrapper[4794]: I0216 17:24:28.232988 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:28 crc kubenswrapper[4794]: I0216 17:24:28.333852 4794 generic.go:334] "Generic (PLEG): container finished" podID="fb9be534-e864-414a-8dcd-6c9457f6f0bc" containerID="5f4596ab47307f80837b2a4ca53f5e88933927c6306098f0ab7184132bd7c176" exitCode=0 Feb 16 17:24:28 crc kubenswrapper[4794]: I0216 17:24:28.333914 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" event={"ID":"fb9be534-e864-414a-8dcd-6c9457f6f0bc","Type":"ContainerDied","Data":"5f4596ab47307f80837b2a4ca53f5e88933927c6306098f0ab7184132bd7c176"} Feb 16 17:24:28 crc kubenswrapper[4794]: I0216 17:24:28.359681 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"60bc3506-fc79-458a-bae4-cedfe5f09450","Type":"ContainerStarted","Data":"16b72ec76fdd869daf9a12348f14e96cba0987dfceaf9c4b9c9435ce5e8459ad"} Feb 16 17:24:28 crc kubenswrapper[4794]: I0216 17:24:28.359742 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"60bc3506-fc79-458a-bae4-cedfe5f09450","Type":"ContainerStarted","Data":"84c3bc688cf374fd20db486406c752c2adcfb9c52b250cea2bae1ffd583931f3"} Feb 16 17:24:28 crc kubenswrapper[4794]: I0216 17:24:28.404751 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.404728051 podStartE2EDuration="2.404728051s" podCreationTimestamp="2026-02-16 17:24:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:24:28.39374808 +0000 UTC m=+1494.341842727" watchObservedRunningTime="2026-02-16 17:24:28.404728051 +0000 UTC m=+1494.352822698" Feb 16 17:24:28 crc kubenswrapper[4794]: I0216 17:24:28.509198 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="863be36f-716a-4890-9790-1e82c5542f1f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.238:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 17:24:28 crc kubenswrapper[4794]: I0216 17:24:28.509759 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="863be36f-716a-4890-9790-1e82c5542f1f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.238:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 17:24:28 crc kubenswrapper[4794]: I0216 17:24:28.526589 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 17:24:28 crc kubenswrapper[4794]: I0216 17:24:28.756536 4794 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/aodh-4618-account-create-update-s8vpk"] Feb 16 17:24:28 crc kubenswrapper[4794]: W0216 17:24:28.950342 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78df36ef_5c86_41c8_9085_7ce98caad880.slice/crio-d9d8276317afe3dd18c1c4cd5d1c572b2509335901ce946fa37fe8ff336e877f WatchSource:0}: Error finding container d9d8276317afe3dd18c1c4cd5d1c572b2509335901ce946fa37fe8ff336e877f: Status 404 returned error can't find the container with id d9d8276317afe3dd18c1c4cd5d1c572b2509335901ce946fa37fe8ff336e877f Feb 16 17:24:28 crc kubenswrapper[4794]: I0216 17:24:28.960049 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-rn6n9"] Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.387089 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.421696 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" event={"ID":"fb9be534-e864-414a-8dcd-6c9457f6f0bc","Type":"ContainerDied","Data":"672d404498dbe189f44ab1a5f7b8057cfa93ced2d96270c1a240d21472c3607f"} Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.421746 4794 scope.go:117] "RemoveContainer" containerID="5f4596ab47307f80837b2a4ca53f5e88933927c6306098f0ab7184132bd7c176" Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.478678 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4618-account-create-update-s8vpk" event={"ID":"a6bd641d-034e-45b5-9379-422fe35d0054","Type":"ContainerStarted","Data":"12f17d5ac32e08af8912b4dd207c6af189a74299ceefe361e64172031e650797"} Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.478766 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4618-account-create-update-s8vpk" 
event={"ID":"a6bd641d-034e-45b5-9379-422fe35d0054","Type":"ContainerStarted","Data":"d4f765b374f88e9160ed2aac652f625bdd7ae0674be7c9fc44c25d05b1fdcf0d"} Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.498171 4794 generic.go:334] "Generic (PLEG): container finished" podID="c08d48e2-27f0-44e5-a13a-815719c3f5dc" containerID="e8abf350a47b29c3209ffe1180e17a1433efc2a261f3f0546d5ea8c697b07457" exitCode=0 Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.498250 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4kglx" event={"ID":"c08d48e2-27f0-44e5-a13a-815719c3f5dc","Type":"ContainerDied","Data":"e8abf350a47b29c3209ffe1180e17a1433efc2a261f3f0546d5ea8c697b07457"} Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.513261 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-rn6n9" event={"ID":"78df36ef-5c86-41c8-9085-7ce98caad880","Type":"ContainerStarted","Data":"d9d8276317afe3dd18c1c4cd5d1c572b2509335901ce946fa37fe8ff336e877f"} Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.525967 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-config\") pod \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.526032 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-ovsdbserver-nb\") pod \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.526164 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-ovsdbserver-sb\") pod 
\"fb9be534-e864-414a-8dcd-6c9457f6f0bc\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.526221 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdfd2\" (UniqueName: \"kubernetes.io/projected/fb9be534-e864-414a-8dcd-6c9457f6f0bc-kube-api-access-zdfd2\") pod \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.526265 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-dns-swift-storage-0\") pod \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.537708 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-dns-svc\") pod \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\" (UID: \"fb9be534-e864-414a-8dcd-6c9457f6f0bc\") " Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.553633 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb9be534-e864-414a-8dcd-6c9457f6f0bc-kube-api-access-zdfd2" (OuterVolumeSpecName: "kube-api-access-zdfd2") pod "fb9be534-e864-414a-8dcd-6c9457f6f0bc" (UID: "fb9be534-e864-414a-8dcd-6c9457f6f0bc"). InnerVolumeSpecName "kube-api-access-zdfd2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.556491 4794 scope.go:117] "RemoveContainer" containerID="2f9a52264662941bf1ae701a008247ee70c257c8840bd22422657d8b15faeb55" Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.561836 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-4618-account-create-update-s8vpk" podStartSLOduration=2.561792623 podStartE2EDuration="2.561792623s" podCreationTimestamp="2026-02-16 17:24:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:24:29.517154939 +0000 UTC m=+1495.465249586" watchObservedRunningTime="2026-02-16 17:24:29.561792623 +0000 UTC m=+1495.509887270" Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.666432 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdfd2\" (UniqueName: \"kubernetes.io/projected/fb9be534-e864-414a-8dcd-6c9457f6f0bc-kube-api-access-zdfd2\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.715328 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fb9be534-e864-414a-8dcd-6c9457f6f0bc" (UID: "fb9be534-e864-414a-8dcd-6c9457f6f0bc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.717464 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fb9be534-e864-414a-8dcd-6c9457f6f0bc" (UID: "fb9be534-e864-414a-8dcd-6c9457f6f0bc"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.719007 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fb9be534-e864-414a-8dcd-6c9457f6f0bc" (UID: "fb9be534-e864-414a-8dcd-6c9457f6f0bc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.761564 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-config" (OuterVolumeSpecName: "config") pod "fb9be534-e864-414a-8dcd-6c9457f6f0bc" (UID: "fb9be534-e864-414a-8dcd-6c9457f6f0bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.765698 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fb9be534-e864-414a-8dcd-6c9457f6f0bc" (UID: "fb9be534-e864-414a-8dcd-6c9457f6f0bc"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.771921 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.771952 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.771961 4794 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.771971 4794 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:29 crc kubenswrapper[4794]: I0216 17:24:29.771980 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb9be534-e864-414a-8dcd-6c9457f6f0bc-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.535080 4794 generic.go:334] "Generic (PLEG): container finished" podID="78df36ef-5c86-41c8-9085-7ce98caad880" containerID="db728f45a7d72303db0063e0c79c648d9723af4783855140e9fc45dad0d2b4ea" exitCode=0 Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.535155 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-rn6n9" event={"ID":"78df36ef-5c86-41c8-9085-7ce98caad880","Type":"ContainerDied","Data":"db728f45a7d72303db0063e0c79c648d9723af4783855140e9fc45dad0d2b4ea"} Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.538332 
4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-4pz6j" Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.546547 4794 generic.go:334] "Generic (PLEG): container finished" podID="a6bd641d-034e-45b5-9379-422fe35d0054" containerID="12f17d5ac32e08af8912b4dd207c6af189a74299ceefe361e64172031e650797" exitCode=0 Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.546684 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.546694 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4618-account-create-update-s8vpk" event={"ID":"a6bd641d-034e-45b5-9379-422fe35d0054","Type":"ContainerDied","Data":"12f17d5ac32e08af8912b4dd207c6af189a74299ceefe361e64172031e650797"} Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.554119 4794 generic.go:334] "Generic (PLEG): container finished" podID="2a990777-4ada-4e8d-ac0f-451a616ec3bc" containerID="151a0e6387a7a8b32580253febd49a76190d33f8308bbf9108816837b1049ad1" exitCode=0 Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.554203 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a990777-4ada-4e8d-ac0f-451a616ec3bc","Type":"ContainerDied","Data":"151a0e6387a7a8b32580253febd49a76190d33f8308bbf9108816837b1049ad1"} Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.554380 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a990777-4ada-4e8d-ac0f-451a616ec3bc","Type":"ContainerDied","Data":"2934d791b98b7b131b86d01aff68a92acb8c945c86934436a42d5c24c4bb20b7"} Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.554412 4794 scope.go:117] "RemoveContainer" containerID="ddc537f70607c175d12073de1959c9cea69b371ff4837fbf5068cc3ed7ac8cbb" Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.591072 4794 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a990777-4ada-4e8d-ac0f-451a616ec3bc-log-httpd\") pod \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.591147 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5swd6\" (UniqueName: \"kubernetes.io/projected/2a990777-4ada-4e8d-ac0f-451a616ec3bc-kube-api-access-5swd6\") pod \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.591210 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-sg-core-conf-yaml\") pod \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.591361 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-scripts\") pod \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.591448 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a990777-4ada-4e8d-ac0f-451a616ec3bc-run-httpd\") pod \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") " Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.591534 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-config-data\") pod \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") 
" Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.591598 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-combined-ca-bundle\") pod \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\" (UID: \"2a990777-4ada-4e8d-ac0f-451a616ec3bc\") "
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.591605 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a990777-4ada-4e8d-ac0f-451a616ec3bc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2a990777-4ada-4e8d-ac0f-451a616ec3bc" (UID: "2a990777-4ada-4e8d-ac0f-451a616ec3bc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.592207 4794 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a990777-4ada-4e8d-ac0f-451a616ec3bc-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.594591 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a990777-4ada-4e8d-ac0f-451a616ec3bc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2a990777-4ada-4e8d-ac0f-451a616ec3bc" (UID: "2a990777-4ada-4e8d-ac0f-451a616ec3bc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.602675 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a990777-4ada-4e8d-ac0f-451a616ec3bc-kube-api-access-5swd6" (OuterVolumeSpecName: "kube-api-access-5swd6") pod "2a990777-4ada-4e8d-ac0f-451a616ec3bc" (UID: "2a990777-4ada-4e8d-ac0f-451a616ec3bc"). InnerVolumeSpecName "kube-api-access-5swd6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.604489 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-scripts" (OuterVolumeSpecName: "scripts") pod "2a990777-4ada-4e8d-ac0f-451a616ec3bc" (UID: "2a990777-4ada-4e8d-ac0f-451a616ec3bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.618777 4794 scope.go:117] "RemoveContainer" containerID="eca50d3d6fed1fb508bf608a4b887e8e035a6a3a9f39e4ace7c87216bc41269a"
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.632282 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-4pz6j"]
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.636725 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2a990777-4ada-4e8d-ac0f-451a616ec3bc" (UID: "2a990777-4ada-4e8d-ac0f-451a616ec3bc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.641827 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-4pz6j"]
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.683710 4794 scope.go:117] "RemoveContainer" containerID="32a49dad1d023460fed090486f413901c58862ab95b018902e8f1343c3efceea"
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.700664 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.700854 4794 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a990777-4ada-4e8d-ac0f-451a616ec3bc-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.700927 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5swd6\" (UniqueName: \"kubernetes.io/projected/2a990777-4ada-4e8d-ac0f-451a616ec3bc-kube-api-access-5swd6\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.700989 4794 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.716989 4794 scope.go:117] "RemoveContainer" containerID="151a0e6387a7a8b32580253febd49a76190d33f8308bbf9108816837b1049ad1"
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.722387 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a990777-4ada-4e8d-ac0f-451a616ec3bc" (UID: "2a990777-4ada-4e8d-ac0f-451a616ec3bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.749441 4794 scope.go:117] "RemoveContainer" containerID="ddc537f70607c175d12073de1959c9cea69b371ff4837fbf5068cc3ed7ac8cbb"
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.753438 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-config-data" (OuterVolumeSpecName: "config-data") pod "2a990777-4ada-4e8d-ac0f-451a616ec3bc" (UID: "2a990777-4ada-4e8d-ac0f-451a616ec3bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:30 crc kubenswrapper[4794]: E0216 17:24:30.753653 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddc537f70607c175d12073de1959c9cea69b371ff4837fbf5068cc3ed7ac8cbb\": container with ID starting with ddc537f70607c175d12073de1959c9cea69b371ff4837fbf5068cc3ed7ac8cbb not found: ID does not exist" containerID="ddc537f70607c175d12073de1959c9cea69b371ff4837fbf5068cc3ed7ac8cbb"
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.753773 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddc537f70607c175d12073de1959c9cea69b371ff4837fbf5068cc3ed7ac8cbb"} err="failed to get container status \"ddc537f70607c175d12073de1959c9cea69b371ff4837fbf5068cc3ed7ac8cbb\": rpc error: code = NotFound desc = could not find container \"ddc537f70607c175d12073de1959c9cea69b371ff4837fbf5068cc3ed7ac8cbb\": container with ID starting with ddc537f70607c175d12073de1959c9cea69b371ff4837fbf5068cc3ed7ac8cbb not found: ID does not exist"
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.753874 4794 scope.go:117] "RemoveContainer" containerID="eca50d3d6fed1fb508bf608a4b887e8e035a6a3a9f39e4ace7c87216bc41269a"
Feb 16 17:24:30 crc kubenswrapper[4794]: E0216 17:24:30.755446 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eca50d3d6fed1fb508bf608a4b887e8e035a6a3a9f39e4ace7c87216bc41269a\": container with ID starting with eca50d3d6fed1fb508bf608a4b887e8e035a6a3a9f39e4ace7c87216bc41269a not found: ID does not exist" containerID="eca50d3d6fed1fb508bf608a4b887e8e035a6a3a9f39e4ace7c87216bc41269a"
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.755485 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eca50d3d6fed1fb508bf608a4b887e8e035a6a3a9f39e4ace7c87216bc41269a"} err="failed to get container status \"eca50d3d6fed1fb508bf608a4b887e8e035a6a3a9f39e4ace7c87216bc41269a\": rpc error: code = NotFound desc = could not find container \"eca50d3d6fed1fb508bf608a4b887e8e035a6a3a9f39e4ace7c87216bc41269a\": container with ID starting with eca50d3d6fed1fb508bf608a4b887e8e035a6a3a9f39e4ace7c87216bc41269a not found: ID does not exist"
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.755509 4794 scope.go:117] "RemoveContainer" containerID="32a49dad1d023460fed090486f413901c58862ab95b018902e8f1343c3efceea"
Feb 16 17:24:30 crc kubenswrapper[4794]: E0216 17:24:30.756811 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32a49dad1d023460fed090486f413901c58862ab95b018902e8f1343c3efceea\": container with ID starting with 32a49dad1d023460fed090486f413901c58862ab95b018902e8f1343c3efceea not found: ID does not exist" containerID="32a49dad1d023460fed090486f413901c58862ab95b018902e8f1343c3efceea"
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.756839 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32a49dad1d023460fed090486f413901c58862ab95b018902e8f1343c3efceea"} err="failed to get container status \"32a49dad1d023460fed090486f413901c58862ab95b018902e8f1343c3efceea\": rpc error: code = NotFound desc = could not find container \"32a49dad1d023460fed090486f413901c58862ab95b018902e8f1343c3efceea\": container with ID starting with 32a49dad1d023460fed090486f413901c58862ab95b018902e8f1343c3efceea not found: ID does not exist"
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.756856 4794 scope.go:117] "RemoveContainer" containerID="151a0e6387a7a8b32580253febd49a76190d33f8308bbf9108816837b1049ad1"
Feb 16 17:24:30 crc kubenswrapper[4794]: E0216 17:24:30.757344 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"151a0e6387a7a8b32580253febd49a76190d33f8308bbf9108816837b1049ad1\": container with ID starting with 151a0e6387a7a8b32580253febd49a76190d33f8308bbf9108816837b1049ad1 not found: ID does not exist" containerID="151a0e6387a7a8b32580253febd49a76190d33f8308bbf9108816837b1049ad1"
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.757504 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"151a0e6387a7a8b32580253febd49a76190d33f8308bbf9108816837b1049ad1"} err="failed to get container status \"151a0e6387a7a8b32580253febd49a76190d33f8308bbf9108816837b1049ad1\": rpc error: code = NotFound desc = could not find container \"151a0e6387a7a8b32580253febd49a76190d33f8308bbf9108816837b1049ad1\": container with ID starting with 151a0e6387a7a8b32580253febd49a76190d33f8308bbf9108816837b1049ad1 not found: ID does not exist"
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.804205 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.804252 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a990777-4ada-4e8d-ac0f-451a616ec3bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:30 crc kubenswrapper[4794]: I0216 17:24:30.816169 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb9be534-e864-414a-8dcd-6c9457f6f0bc" path="/var/lib/kubelet/pods/fb9be534-e864-414a-8dcd-6c9457f6f0bc/volumes"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.073265 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4kglx"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.213089 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c08d48e2-27f0-44e5-a13a-815719c3f5dc-config-data\") pod \"c08d48e2-27f0-44e5-a13a-815719c3f5dc\" (UID: \"c08d48e2-27f0-44e5-a13a-815719c3f5dc\") "
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.213190 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c08d48e2-27f0-44e5-a13a-815719c3f5dc-scripts\") pod \"c08d48e2-27f0-44e5-a13a-815719c3f5dc\" (UID: \"c08d48e2-27f0-44e5-a13a-815719c3f5dc\") "
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.213223 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c08d48e2-27f0-44e5-a13a-815719c3f5dc-combined-ca-bundle\") pod \"c08d48e2-27f0-44e5-a13a-815719c3f5dc\" (UID: \"c08d48e2-27f0-44e5-a13a-815719c3f5dc\") "
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.213511 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnklw\" (UniqueName: \"kubernetes.io/projected/c08d48e2-27f0-44e5-a13a-815719c3f5dc-kube-api-access-hnklw\") pod \"c08d48e2-27f0-44e5-a13a-815719c3f5dc\" (UID: \"c08d48e2-27f0-44e5-a13a-815719c3f5dc\") "
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.217537 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c08d48e2-27f0-44e5-a13a-815719c3f5dc-kube-api-access-hnklw" (OuterVolumeSpecName: "kube-api-access-hnklw") pod "c08d48e2-27f0-44e5-a13a-815719c3f5dc" (UID: "c08d48e2-27f0-44e5-a13a-815719c3f5dc"). InnerVolumeSpecName "kube-api-access-hnklw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.233451 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c08d48e2-27f0-44e5-a13a-815719c3f5dc-scripts" (OuterVolumeSpecName: "scripts") pod "c08d48e2-27f0-44e5-a13a-815719c3f5dc" (UID: "c08d48e2-27f0-44e5-a13a-815719c3f5dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.246375 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c08d48e2-27f0-44e5-a13a-815719c3f5dc-config-data" (OuterVolumeSpecName: "config-data") pod "c08d48e2-27f0-44e5-a13a-815719c3f5dc" (UID: "c08d48e2-27f0-44e5-a13a-815719c3f5dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.250519 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c08d48e2-27f0-44e5-a13a-815719c3f5dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c08d48e2-27f0-44e5-a13a-815719c3f5dc" (UID: "c08d48e2-27f0-44e5-a13a-815719c3f5dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.316495 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c08d48e2-27f0-44e5-a13a-815719c3f5dc-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.316533 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c08d48e2-27f0-44e5-a13a-815719c3f5dc-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.316545 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c08d48e2-27f0-44e5-a13a-815719c3f5dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.316559 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnklw\" (UniqueName: \"kubernetes.io/projected/c08d48e2-27f0-44e5-a13a-815719c3f5dc-kube-api-access-hnklw\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.566708 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4kglx" event={"ID":"c08d48e2-27f0-44e5-a13a-815719c3f5dc","Type":"ContainerDied","Data":"773775cfc4fe23a88620cab0e828787740766b5b5b144083038be4beb5ea6f70"}
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.566775 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="773775cfc4fe23a88620cab0e828787740766b5b5b144083038be4beb5ea6f70"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.566787 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4kglx"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.568858 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.628034 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.647584 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.671379 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:24:31 crc kubenswrapper[4794]: E0216 17:24:31.672008 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a990777-4ada-4e8d-ac0f-451a616ec3bc" containerName="ceilometer-notification-agent"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.672033 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a990777-4ada-4e8d-ac0f-451a616ec3bc" containerName="ceilometer-notification-agent"
Feb 16 17:24:31 crc kubenswrapper[4794]: E0216 17:24:31.672052 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a990777-4ada-4e8d-ac0f-451a616ec3bc" containerName="sg-core"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.672060 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a990777-4ada-4e8d-ac0f-451a616ec3bc" containerName="sg-core"
Feb 16 17:24:31 crc kubenswrapper[4794]: E0216 17:24:31.672081 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a990777-4ada-4e8d-ac0f-451a616ec3bc" containerName="proxy-httpd"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.672091 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a990777-4ada-4e8d-ac0f-451a616ec3bc" containerName="proxy-httpd"
Feb 16 17:24:31 crc kubenswrapper[4794]: E0216 17:24:31.672109 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb9be534-e864-414a-8dcd-6c9457f6f0bc" containerName="dnsmasq-dns"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.672116 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9be534-e864-414a-8dcd-6c9457f6f0bc" containerName="dnsmasq-dns"
Feb 16 17:24:31 crc kubenswrapper[4794]: E0216 17:24:31.672132 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c08d48e2-27f0-44e5-a13a-815719c3f5dc" containerName="nova-manage"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.672138 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="c08d48e2-27f0-44e5-a13a-815719c3f5dc" containerName="nova-manage"
Feb 16 17:24:31 crc kubenswrapper[4794]: E0216 17:24:31.672163 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb9be534-e864-414a-8dcd-6c9457f6f0bc" containerName="init"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.672169 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9be534-e864-414a-8dcd-6c9457f6f0bc" containerName="init"
Feb 16 17:24:31 crc kubenswrapper[4794]: E0216 17:24:31.672188 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a990777-4ada-4e8d-ac0f-451a616ec3bc" containerName="ceilometer-central-agent"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.672195 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a990777-4ada-4e8d-ac0f-451a616ec3bc" containerName="ceilometer-central-agent"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.672500 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a990777-4ada-4e8d-ac0f-451a616ec3bc" containerName="proxy-httpd"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.672525 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="c08d48e2-27f0-44e5-a13a-815719c3f5dc" containerName="nova-manage"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.672543 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a990777-4ada-4e8d-ac0f-451a616ec3bc" containerName="sg-core"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.672557 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a990777-4ada-4e8d-ac0f-451a616ec3bc" containerName="ceilometer-central-agent"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.672584 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a990777-4ada-4e8d-ac0f-451a616ec3bc" containerName="ceilometer-notification-agent"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.672596 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb9be534-e864-414a-8dcd-6c9457f6f0bc" containerName="dnsmasq-dns"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.675291 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.679988 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.680223 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.683737 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.704017 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.707229 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.724089 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbgk7\" (UniqueName: \"kubernetes.io/projected/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-kube-api-access-cbgk7\") pod \"ceilometer-0\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") " pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.724148 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-config-data\") pod \"ceilometer-0\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") " pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.724193 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-log-httpd\") pod \"ceilometer-0\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") " pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.724283 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-run-httpd\") pod \"ceilometer-0\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") " pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.724325 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-scripts\") pod \"ceilometer-0\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") " pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.724344 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") " pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.724457 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") " pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.753693 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.754086 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="863be36f-716a-4890-9790-1e82c5542f1f" containerName="nova-api-api" containerID="cri-o://76159a0e72c392553cae5906836fd1e7b1067347b69184a2998a3d394b57803b" gracePeriod=30
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.753942 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="863be36f-716a-4890-9790-1e82c5542f1f" containerName="nova-api-log" containerID="cri-o://7da1c55f756dd17a88b39d85dc53a6648ed8d9966956f0f579ccd7603fac65b7" gracePeriod=30
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.798151 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.798392 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="43f61c36-3ecb-43dc-a9cd-d713af555005" containerName="nova-scheduler-scheduler" containerID="cri-o://6a13c70a904a7546f869de189ad3d3f65091bd7b1966f754e6cf7722f8723eb5" gracePeriod=30
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.825846 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") " pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.825919 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbgk7\" (UniqueName: \"kubernetes.io/projected/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-kube-api-access-cbgk7\") pod \"ceilometer-0\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") " pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.825950 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-config-data\") pod \"ceilometer-0\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") " pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.825971 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-log-httpd\") pod \"ceilometer-0\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") " pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.826058 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-scripts\") pod \"ceilometer-0\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") " pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.826075 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-run-httpd\") pod \"ceilometer-0\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") " pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.826095 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") " pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.831162 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-log-httpd\") pod \"ceilometer-0\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") " pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.831934 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-run-httpd\") pod \"ceilometer-0\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") " pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.836985 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") " pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.838149 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-config-data\") pod \"ceilometer-0\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") " pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.840769 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-scripts\") pod \"ceilometer-0\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") " pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.841429 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") " pod="openstack/ceilometer-0"
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.842962 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 17:24:31 crc kubenswrapper[4794]: I0216 17:24:31.854124 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbgk7\" (UniqueName: \"kubernetes.io/projected/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-kube-api-access-cbgk7\") pod \"ceilometer-0\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") " pod="openstack/ceilometer-0"
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.008826 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.232667 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-rn6n9"
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.346062 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78df36ef-5c86-41c8-9085-7ce98caad880-operator-scripts\") pod \"78df36ef-5c86-41c8-9085-7ce98caad880\" (UID: \"78df36ef-5c86-41c8-9085-7ce98caad880\") "
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.346171 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5qx7\" (UniqueName: \"kubernetes.io/projected/78df36ef-5c86-41c8-9085-7ce98caad880-kube-api-access-g5qx7\") pod \"78df36ef-5c86-41c8-9085-7ce98caad880\" (UID: \"78df36ef-5c86-41c8-9085-7ce98caad880\") "
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.347242 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78df36ef-5c86-41c8-9085-7ce98caad880-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "78df36ef-5c86-41c8-9085-7ce98caad880" (UID: "78df36ef-5c86-41c8-9085-7ce98caad880"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.358514 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78df36ef-5c86-41c8-9085-7ce98caad880-kube-api-access-g5qx7" (OuterVolumeSpecName: "kube-api-access-g5qx7") pod "78df36ef-5c86-41c8-9085-7ce98caad880" (UID: "78df36ef-5c86-41c8-9085-7ce98caad880"). InnerVolumeSpecName "kube-api-access-g5qx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.449712 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78df36ef-5c86-41c8-9085-7ce98caad880-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.450046 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5qx7\" (UniqueName: \"kubernetes.io/projected/78df36ef-5c86-41c8-9085-7ce98caad880-kube-api-access-g5qx7\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.453679 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-4618-account-create-update-s8vpk"
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.565632 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpcjl\" (UniqueName: \"kubernetes.io/projected/a6bd641d-034e-45b5-9379-422fe35d0054-kube-api-access-qpcjl\") pod \"a6bd641d-034e-45b5-9379-422fe35d0054\" (UID: \"a6bd641d-034e-45b5-9379-422fe35d0054\") "
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.565741 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6bd641d-034e-45b5-9379-422fe35d0054-operator-scripts\") pod \"a6bd641d-034e-45b5-9379-422fe35d0054\" (UID: \"a6bd641d-034e-45b5-9379-422fe35d0054\") "
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.574896 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6bd641d-034e-45b5-9379-422fe35d0054-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a6bd641d-034e-45b5-9379-422fe35d0054" (UID: "a6bd641d-034e-45b5-9379-422fe35d0054"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.575048 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6bd641d-034e-45b5-9379-422fe35d0054-kube-api-access-qpcjl" (OuterVolumeSpecName: "kube-api-access-qpcjl") pod "a6bd641d-034e-45b5-9379-422fe35d0054" (UID: "a6bd641d-034e-45b5-9379-422fe35d0054"). InnerVolumeSpecName "kube-api-access-qpcjl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.594475 4794 generic.go:334] "Generic (PLEG): container finished" podID="863be36f-716a-4890-9790-1e82c5542f1f" containerID="7da1c55f756dd17a88b39d85dc53a6648ed8d9966956f0f579ccd7603fac65b7" exitCode=143
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.594542 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"863be36f-716a-4890-9790-1e82c5542f1f","Type":"ContainerDied","Data":"7da1c55f756dd17a88b39d85dc53a6648ed8d9966956f0f579ccd7603fac65b7"}
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.595647 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4618-account-create-update-s8vpk" event={"ID":"a6bd641d-034e-45b5-9379-422fe35d0054","Type":"ContainerDied","Data":"d4f765b374f88e9160ed2aac652f625bdd7ae0674be7c9fc44c25d05b1fdcf0d"}
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.595674 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4f765b374f88e9160ed2aac652f625bdd7ae0674be7c9fc44c25d05b1fdcf0d"
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.595749 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-4618-account-create-update-s8vpk"
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.613732 4794 generic.go:334] "Generic (PLEG): container finished" podID="521b6a44-f328-4e6e-926b-f27a9b9810ad" containerID="cef756b523489089cdfc52fe85cf59247cde121a8515537da9a4a1f17ba2c217" exitCode=0
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.613821 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9j8qt" event={"ID":"521b6a44-f328-4e6e-926b-f27a9b9810ad","Type":"ContainerDied","Data":"cef756b523489089cdfc52fe85cf59247cde121a8515537da9a4a1f17ba2c217"}
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.625008 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-rn6n9"
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.625419 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-rn6n9" event={"ID":"78df36ef-5c86-41c8-9085-7ce98caad880","Type":"ContainerDied","Data":"d9d8276317afe3dd18c1c4cd5d1c572b2509335901ce946fa37fe8ff336e877f"}
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.625447 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9d8276317afe3dd18c1c4cd5d1c572b2509335901ce946fa37fe8ff336e877f"
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.676293 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpcjl\" (UniqueName: \"kubernetes.io/projected/a6bd641d-034e-45b5-9379-422fe35d0054-kube-api-access-qpcjl\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.676357 4794 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6bd641d-034e-45b5-9379-422fe35d0054-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.804359 4794 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a990777-4ada-4e8d-ac0f-451a616ec3bc" path="/var/lib/kubelet/pods/2a990777-4ada-4e8d-ac0f-451a616ec3bc/volumes" Feb 16 17:24:32 crc kubenswrapper[4794]: I0216 17:24:32.881573 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:24:32 crc kubenswrapper[4794]: W0216 17:24:32.882417 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7b8e8f9_a2c7_4ce7_8a5e_24300eb8ce94.slice/crio-b6159682fde96e07e3a0e2f92425bbf3f217b837bf7a1055381dcb937c2595d4 WatchSource:0}: Error finding container b6159682fde96e07e3a0e2f92425bbf3f217b837bf7a1055381dcb937c2595d4: Status 404 returned error can't find the container with id b6159682fde96e07e3a0e2f92425bbf3f217b837bf7a1055381dcb937c2595d4 Feb 16 17:24:32 crc kubenswrapper[4794]: E0216 17:24:32.966894 4794 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a13c70a904a7546f869de189ad3d3f65091bd7b1966f754e6cf7722f8723eb5 is running failed: container process not found" containerID="6a13c70a904a7546f869de189ad3d3f65091bd7b1966f754e6cf7722f8723eb5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 17:24:32 crc kubenswrapper[4794]: E0216 17:24:32.967455 4794 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a13c70a904a7546f869de189ad3d3f65091bd7b1966f754e6cf7722f8723eb5 is running failed: container process not found" containerID="6a13c70a904a7546f869de189ad3d3f65091bd7b1966f754e6cf7722f8723eb5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 17:24:32 crc kubenswrapper[4794]: E0216 17:24:32.967683 4794 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking 
if PID of 6a13c70a904a7546f869de189ad3d3f65091bd7b1966f754e6cf7722f8723eb5 is running failed: container process not found" containerID="6a13c70a904a7546f869de189ad3d3f65091bd7b1966f754e6cf7722f8723eb5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 17:24:32 crc kubenswrapper[4794]: E0216 17:24:32.967736 4794 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a13c70a904a7546f869de189ad3d3f65091bd7b1966f754e6cf7722f8723eb5 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="43f61c36-3ecb-43dc-a9cd-d713af555005" containerName="nova-scheduler-scheduler" Feb 16 17:24:33 crc kubenswrapper[4794]: I0216 17:24:33.483264 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:24:33 crc kubenswrapper[4794]: I0216 17:24:33.600417 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f61c36-3ecb-43dc-a9cd-d713af555005-config-data\") pod \"43f61c36-3ecb-43dc-a9cd-d713af555005\" (UID: \"43f61c36-3ecb-43dc-a9cd-d713af555005\") " Feb 16 17:24:33 crc kubenswrapper[4794]: I0216 17:24:33.600762 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f61c36-3ecb-43dc-a9cd-d713af555005-combined-ca-bundle\") pod \"43f61c36-3ecb-43dc-a9cd-d713af555005\" (UID: \"43f61c36-3ecb-43dc-a9cd-d713af555005\") " Feb 16 17:24:33 crc kubenswrapper[4794]: I0216 17:24:33.600861 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjmbx\" (UniqueName: \"kubernetes.io/projected/43f61c36-3ecb-43dc-a9cd-d713af555005-kube-api-access-qjmbx\") pod \"43f61c36-3ecb-43dc-a9cd-d713af555005\" (UID: \"43f61c36-3ecb-43dc-a9cd-d713af555005\") " Feb 16 17:24:33 crc kubenswrapper[4794]: I0216 
17:24:33.608536 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43f61c36-3ecb-43dc-a9cd-d713af555005-kube-api-access-qjmbx" (OuterVolumeSpecName: "kube-api-access-qjmbx") pod "43f61c36-3ecb-43dc-a9cd-d713af555005" (UID: "43f61c36-3ecb-43dc-a9cd-d713af555005"). InnerVolumeSpecName "kube-api-access-qjmbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:24:33 crc kubenswrapper[4794]: I0216 17:24:33.647242 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f61c36-3ecb-43dc-a9cd-d713af555005-config-data" (OuterVolumeSpecName: "config-data") pod "43f61c36-3ecb-43dc-a9cd-d713af555005" (UID: "43f61c36-3ecb-43dc-a9cd-d713af555005"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:24:33 crc kubenswrapper[4794]: I0216 17:24:33.648144 4794 generic.go:334] "Generic (PLEG): container finished" podID="43f61c36-3ecb-43dc-a9cd-d713af555005" containerID="6a13c70a904a7546f869de189ad3d3f65091bd7b1966f754e6cf7722f8723eb5" exitCode=0 Feb 16 17:24:33 crc kubenswrapper[4794]: I0216 17:24:33.648318 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"43f61c36-3ecb-43dc-a9cd-d713af555005","Type":"ContainerDied","Data":"6a13c70a904a7546f869de189ad3d3f65091bd7b1966f754e6cf7722f8723eb5"} Feb 16 17:24:33 crc kubenswrapper[4794]: I0216 17:24:33.648412 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"43f61c36-3ecb-43dc-a9cd-d713af555005","Type":"ContainerDied","Data":"b1144509806ec8559531176fd9266e4d3dae26701e06b81c2b4e0e12709d7806"} Feb 16 17:24:33 crc kubenswrapper[4794]: I0216 17:24:33.648483 4794 scope.go:117] "RemoveContainer" containerID="6a13c70a904a7546f869de189ad3d3f65091bd7b1966f754e6cf7722f8723eb5" Feb 16 17:24:33 crc kubenswrapper[4794]: I0216 17:24:33.648685 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:24:33 crc kubenswrapper[4794]: I0216 17:24:33.657042 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94","Type":"ContainerStarted","Data":"575daf2240b0c78c5ffd48e6a5e3537007479eee5178e5f5870bc389b2b21629"} Feb 16 17:24:33 crc kubenswrapper[4794]: I0216 17:24:33.657088 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94","Type":"ContainerStarted","Data":"b6159682fde96e07e3a0e2f92425bbf3f217b837bf7a1055381dcb937c2595d4"} Feb 16 17:24:33 crc kubenswrapper[4794]: I0216 17:24:33.657371 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="60bc3506-fc79-458a-bae4-cedfe5f09450" containerName="nova-metadata-log" containerID="cri-o://84c3bc688cf374fd20db486406c752c2adcfb9c52b250cea2bae1ffd583931f3" gracePeriod=30 Feb 16 17:24:33 crc kubenswrapper[4794]: I0216 17:24:33.658018 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="60bc3506-fc79-458a-bae4-cedfe5f09450" containerName="nova-metadata-metadata" containerID="cri-o://16b72ec76fdd869daf9a12348f14e96cba0987dfceaf9c4b9c9435ce5e8459ad" gracePeriod=30 Feb 16 17:24:33 crc kubenswrapper[4794]: I0216 17:24:33.666489 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f61c36-3ecb-43dc-a9cd-d713af555005-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43f61c36-3ecb-43dc-a9cd-d713af555005" (UID: "43f61c36-3ecb-43dc-a9cd-d713af555005"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:24:33 crc kubenswrapper[4794]: I0216 17:24:33.690210 4794 scope.go:117] "RemoveContainer" containerID="6a13c70a904a7546f869de189ad3d3f65091bd7b1966f754e6cf7722f8723eb5" Feb 16 17:24:33 crc kubenswrapper[4794]: E0216 17:24:33.690671 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a13c70a904a7546f869de189ad3d3f65091bd7b1966f754e6cf7722f8723eb5\": container with ID starting with 6a13c70a904a7546f869de189ad3d3f65091bd7b1966f754e6cf7722f8723eb5 not found: ID does not exist" containerID="6a13c70a904a7546f869de189ad3d3f65091bd7b1966f754e6cf7722f8723eb5" Feb 16 17:24:33 crc kubenswrapper[4794]: I0216 17:24:33.690711 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a13c70a904a7546f869de189ad3d3f65091bd7b1966f754e6cf7722f8723eb5"} err="failed to get container status \"6a13c70a904a7546f869de189ad3d3f65091bd7b1966f754e6cf7722f8723eb5\": rpc error: code = NotFound desc = could not find container \"6a13c70a904a7546f869de189ad3d3f65091bd7b1966f754e6cf7722f8723eb5\": container with ID starting with 6a13c70a904a7546f869de189ad3d3f65091bd7b1966f754e6cf7722f8723eb5 not found: ID does not exist" Feb 16 17:24:33 crc kubenswrapper[4794]: I0216 17:24:33.703891 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f61c36-3ecb-43dc-a9cd-d713af555005-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:33 crc kubenswrapper[4794]: I0216 17:24:33.704246 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f61c36-3ecb-43dc-a9cd-d713af555005-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:33 crc kubenswrapper[4794]: I0216 17:24:33.704260 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjmbx\" (UniqueName: 
\"kubernetes.io/projected/43f61c36-3ecb-43dc-a9cd-d713af555005-kube-api-access-qjmbx\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.023669 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.037396 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.047108 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:24:34 crc kubenswrapper[4794]: E0216 17:24:34.047667 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43f61c36-3ecb-43dc-a9cd-d713af555005" containerName="nova-scheduler-scheduler" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.047683 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="43f61c36-3ecb-43dc-a9cd-d713af555005" containerName="nova-scheduler-scheduler" Feb 16 17:24:34 crc kubenswrapper[4794]: E0216 17:24:34.047701 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78df36ef-5c86-41c8-9085-7ce98caad880" containerName="mariadb-database-create" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.047707 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="78df36ef-5c86-41c8-9085-7ce98caad880" containerName="mariadb-database-create" Feb 16 17:24:34 crc kubenswrapper[4794]: E0216 17:24:34.047721 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6bd641d-034e-45b5-9379-422fe35d0054" containerName="mariadb-account-create-update" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.047728 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6bd641d-034e-45b5-9379-422fe35d0054" containerName="mariadb-account-create-update" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.047969 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="78df36ef-5c86-41c8-9085-7ce98caad880" 
containerName="mariadb-database-create" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.047991 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6bd641d-034e-45b5-9379-422fe35d0054" containerName="mariadb-account-create-update" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.048010 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="43f61c36-3ecb-43dc-a9cd-d713af555005" containerName="nova-scheduler-scheduler" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.048884 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.056637 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.078868 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.116914 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ed14a7-ee41-453d-8114-8e955b120c40-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b0ed14a7-ee41-453d-8114-8e955b120c40\") " pod="openstack/nova-scheduler-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.117502 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ed14a7-ee41-453d-8114-8e955b120c40-config-data\") pod \"nova-scheduler-0\" (UID: \"b0ed14a7-ee41-453d-8114-8e955b120c40\") " pod="openstack/nova-scheduler-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.117728 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvl46\" (UniqueName: 
\"kubernetes.io/projected/b0ed14a7-ee41-453d-8114-8e955b120c40-kube-api-access-qvl46\") pod \"nova-scheduler-0\" (UID: \"b0ed14a7-ee41-453d-8114-8e955b120c40\") " pod="openstack/nova-scheduler-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.219601 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ed14a7-ee41-453d-8114-8e955b120c40-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b0ed14a7-ee41-453d-8114-8e955b120c40\") " pod="openstack/nova-scheduler-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.219788 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ed14a7-ee41-453d-8114-8e955b120c40-config-data\") pod \"nova-scheduler-0\" (UID: \"b0ed14a7-ee41-453d-8114-8e955b120c40\") " pod="openstack/nova-scheduler-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.219853 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvl46\" (UniqueName: \"kubernetes.io/projected/b0ed14a7-ee41-453d-8114-8e955b120c40-kube-api-access-qvl46\") pod \"nova-scheduler-0\" (UID: \"b0ed14a7-ee41-453d-8114-8e955b120c40\") " pod="openstack/nova-scheduler-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.225415 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ed14a7-ee41-453d-8114-8e955b120c40-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b0ed14a7-ee41-453d-8114-8e955b120c40\") " pod="openstack/nova-scheduler-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.240082 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ed14a7-ee41-453d-8114-8e955b120c40-config-data\") pod \"nova-scheduler-0\" (UID: \"b0ed14a7-ee41-453d-8114-8e955b120c40\") " 
pod="openstack/nova-scheduler-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.242479 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvl46\" (UniqueName: \"kubernetes.io/projected/b0ed14a7-ee41-453d-8114-8e955b120c40-kube-api-access-qvl46\") pod \"nova-scheduler-0\" (UID: \"b0ed14a7-ee41-453d-8114-8e955b120c40\") " pod="openstack/nova-scheduler-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.359464 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9j8qt" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.372734 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.413503 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.423429 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xj75\" (UniqueName: \"kubernetes.io/projected/521b6a44-f328-4e6e-926b-f27a9b9810ad-kube-api-access-9xj75\") pod \"521b6a44-f328-4e6e-926b-f27a9b9810ad\" (UID: \"521b6a44-f328-4e6e-926b-f27a9b9810ad\") " Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.423857 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/521b6a44-f328-4e6e-926b-f27a9b9810ad-scripts\") pod \"521b6a44-f328-4e6e-926b-f27a9b9810ad\" (UID: \"521b6a44-f328-4e6e-926b-f27a9b9810ad\") " Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.424012 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/521b6a44-f328-4e6e-926b-f27a9b9810ad-config-data\") pod \"521b6a44-f328-4e6e-926b-f27a9b9810ad\" (UID: \"521b6a44-f328-4e6e-926b-f27a9b9810ad\") " 
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.424180 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521b6a44-f328-4e6e-926b-f27a9b9810ad-combined-ca-bundle\") pod \"521b6a44-f328-4e6e-926b-f27a9b9810ad\" (UID: \"521b6a44-f328-4e6e-926b-f27a9b9810ad\") "
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.428481 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/521b6a44-f328-4e6e-926b-f27a9b9810ad-kube-api-access-9xj75" (OuterVolumeSpecName: "kube-api-access-9xj75") pod "521b6a44-f328-4e6e-926b-f27a9b9810ad" (UID: "521b6a44-f328-4e6e-926b-f27a9b9810ad"). InnerVolumeSpecName "kube-api-access-9xj75". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.435509 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521b6a44-f328-4e6e-926b-f27a9b9810ad-scripts" (OuterVolumeSpecName: "scripts") pod "521b6a44-f328-4e6e-926b-f27a9b9810ad" (UID: "521b6a44-f328-4e6e-926b-f27a9b9810ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.501708 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521b6a44-f328-4e6e-926b-f27a9b9810ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "521b6a44-f328-4e6e-926b-f27a9b9810ad" (UID: "521b6a44-f328-4e6e-926b-f27a9b9810ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.527816 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/60bc3506-fc79-458a-bae4-cedfe5f09450-nova-metadata-tls-certs\") pod \"60bc3506-fc79-458a-bae4-cedfe5f09450\" (UID: \"60bc3506-fc79-458a-bae4-cedfe5f09450\") "
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.527984 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60bc3506-fc79-458a-bae4-cedfe5f09450-logs\") pod \"60bc3506-fc79-458a-bae4-cedfe5f09450\" (UID: \"60bc3506-fc79-458a-bae4-cedfe5f09450\") "
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.528119 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60bc3506-fc79-458a-bae4-cedfe5f09450-config-data\") pod \"60bc3506-fc79-458a-bae4-cedfe5f09450\" (UID: \"60bc3506-fc79-458a-bae4-cedfe5f09450\") "
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.528183 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m2ls\" (UniqueName: \"kubernetes.io/projected/60bc3506-fc79-458a-bae4-cedfe5f09450-kube-api-access-5m2ls\") pod \"60bc3506-fc79-458a-bae4-cedfe5f09450\" (UID: \"60bc3506-fc79-458a-bae4-cedfe5f09450\") "
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.528206 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60bc3506-fc79-458a-bae4-cedfe5f09450-combined-ca-bundle\") pod \"60bc3506-fc79-458a-bae4-cedfe5f09450\" (UID: \"60bc3506-fc79-458a-bae4-cedfe5f09450\") "
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.528272 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521b6a44-f328-4e6e-926b-f27a9b9810ad-config-data" (OuterVolumeSpecName: "config-data") pod "521b6a44-f328-4e6e-926b-f27a9b9810ad" (UID: "521b6a44-f328-4e6e-926b-f27a9b9810ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.528653 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60bc3506-fc79-458a-bae4-cedfe5f09450-logs" (OuterVolumeSpecName: "logs") pod "60bc3506-fc79-458a-bae4-cedfe5f09450" (UID: "60bc3506-fc79-458a-bae4-cedfe5f09450"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.529165 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xj75\" (UniqueName: \"kubernetes.io/projected/521b6a44-f328-4e6e-926b-f27a9b9810ad-kube-api-access-9xj75\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.529181 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/521b6a44-f328-4e6e-926b-f27a9b9810ad-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.529190 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/521b6a44-f328-4e6e-926b-f27a9b9810ad-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.529200 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521b6a44-f328-4e6e-926b-f27a9b9810ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.529208 4794 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60bc3506-fc79-458a-bae4-cedfe5f09450-logs\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.533152 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60bc3506-fc79-458a-bae4-cedfe5f09450-kube-api-access-5m2ls" (OuterVolumeSpecName: "kube-api-access-5m2ls") pod "60bc3506-fc79-458a-bae4-cedfe5f09450" (UID: "60bc3506-fc79-458a-bae4-cedfe5f09450"). InnerVolumeSpecName "kube-api-access-5m2ls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.567680 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60bc3506-fc79-458a-bae4-cedfe5f09450-config-data" (OuterVolumeSpecName: "config-data") pod "60bc3506-fc79-458a-bae4-cedfe5f09450" (UID: "60bc3506-fc79-458a-bae4-cedfe5f09450"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.581471 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60bc3506-fc79-458a-bae4-cedfe5f09450-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60bc3506-fc79-458a-bae4-cedfe5f09450" (UID: "60bc3506-fc79-458a-bae4-cedfe5f09450"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.631307 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60bc3506-fc79-458a-bae4-cedfe5f09450-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "60bc3506-fc79-458a-bae4-cedfe5f09450" (UID: "60bc3506-fc79-458a-bae4-cedfe5f09450"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.631429 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/60bc3506-fc79-458a-bae4-cedfe5f09450-nova-metadata-tls-certs\") pod \"60bc3506-fc79-458a-bae4-cedfe5f09450\" (UID: \"60bc3506-fc79-458a-bae4-cedfe5f09450\") "
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.631996 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60bc3506-fc79-458a-bae4-cedfe5f09450-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.632015 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m2ls\" (UniqueName: \"kubernetes.io/projected/60bc3506-fc79-458a-bae4-cedfe5f09450-kube-api-access-5m2ls\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.632024 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60bc3506-fc79-458a-bae4-cedfe5f09450-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:34 crc kubenswrapper[4794]: W0216 17:24:34.632082 4794 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/60bc3506-fc79-458a-bae4-cedfe5f09450/volumes/kubernetes.io~secret/nova-metadata-tls-certs
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.632090 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60bc3506-fc79-458a-bae4-cedfe5f09450-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "60bc3506-fc79-458a-bae4-cedfe5f09450" (UID: "60bc3506-fc79-458a-bae4-cedfe5f09450"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.672043 4794 generic.go:334] "Generic (PLEG): container finished" podID="60bc3506-fc79-458a-bae4-cedfe5f09450" containerID="16b72ec76fdd869daf9a12348f14e96cba0987dfceaf9c4b9c9435ce5e8459ad" exitCode=0
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.672237 4794 generic.go:334] "Generic (PLEG): container finished" podID="60bc3506-fc79-458a-bae4-cedfe5f09450" containerID="84c3bc688cf374fd20db486406c752c2adcfb9c52b250cea2bae1ffd583931f3" exitCode=143
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.672229 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"60bc3506-fc79-458a-bae4-cedfe5f09450","Type":"ContainerDied","Data":"16b72ec76fdd869daf9a12348f14e96cba0987dfceaf9c4b9c9435ce5e8459ad"}
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.672426 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"60bc3506-fc79-458a-bae4-cedfe5f09450","Type":"ContainerDied","Data":"84c3bc688cf374fd20db486406c752c2adcfb9c52b250cea2bae1ffd583931f3"}
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.672487 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"60bc3506-fc79-458a-bae4-cedfe5f09450","Type":"ContainerDied","Data":"a7159595670de318a94b8def56b4cdf01cb8a5e4e6ad0fcc7ec09718f5965dcd"}
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.672569 4794 scope.go:117] "RemoveContainer" containerID="16b72ec76fdd869daf9a12348f14e96cba0987dfceaf9c4b9c9435ce5e8459ad"
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.675006 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.683001 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94","Type":"ContainerStarted","Data":"1e9bef14e6f742a06ccd945662057ab4607e9e4d5326ba1ee50f7a53e4820fcb"}
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.690604 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9j8qt" event={"ID":"521b6a44-f328-4e6e-926b-f27a9b9810ad","Type":"ContainerDied","Data":"3d989728a6c6473563fd329699220c7c5105ee4d0614aa7048d2de1a9a071282"}
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.690648 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d989728a6c6473563fd329699220c7c5105ee4d0614aa7048d2de1a9a071282"
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.690715 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9j8qt"
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.719569 4794 scope.go:117] "RemoveContainer" containerID="84c3bc688cf374fd20db486406c752c2adcfb9c52b250cea2bae1ffd583931f3"
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.734189 4794 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/60bc3506-fc79-458a-bae4-cedfe5f09450-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.764700 4794 scope.go:117] "RemoveContainer" containerID="16b72ec76fdd869daf9a12348f14e96cba0987dfceaf9c4b9c9435ce5e8459ad"
Feb 16 17:24:34 crc kubenswrapper[4794]: E0216 17:24:34.768414 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16b72ec76fdd869daf9a12348f14e96cba0987dfceaf9c4b9c9435ce5e8459ad\": container with ID starting with 16b72ec76fdd869daf9a12348f14e96cba0987dfceaf9c4b9c9435ce5e8459ad not found: ID does not exist" containerID="16b72ec76fdd869daf9a12348f14e96cba0987dfceaf9c4b9c9435ce5e8459ad"
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.768451 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16b72ec76fdd869daf9a12348f14e96cba0987dfceaf9c4b9c9435ce5e8459ad"} err="failed to get container status \"16b72ec76fdd869daf9a12348f14e96cba0987dfceaf9c4b9c9435ce5e8459ad\": rpc error: code = NotFound desc = could not find container \"16b72ec76fdd869daf9a12348f14e96cba0987dfceaf9c4b9c9435ce5e8459ad\": container with ID starting with 16b72ec76fdd869daf9a12348f14e96cba0987dfceaf9c4b9c9435ce5e8459ad not found: ID does not exist"
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.768481 4794 scope.go:117] "RemoveContainer" containerID="84c3bc688cf374fd20db486406c752c2adcfb9c52b250cea2bae1ffd583931f3"
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.770443 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 16 17:24:34 crc kubenswrapper[4794]: E0216 17:24:34.772305 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84c3bc688cf374fd20db486406c752c2adcfb9c52b250cea2bae1ffd583931f3\": container with ID starting with 84c3bc688cf374fd20db486406c752c2adcfb9c52b250cea2bae1ffd583931f3 not found: ID does not exist" containerID="84c3bc688cf374fd20db486406c752c2adcfb9c52b250cea2bae1ffd583931f3"
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.772435 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84c3bc688cf374fd20db486406c752c2adcfb9c52b250cea2bae1ffd583931f3"} err="failed to get container status \"84c3bc688cf374fd20db486406c752c2adcfb9c52b250cea2bae1ffd583931f3\": rpc error: code = NotFound desc = could not find container \"84c3bc688cf374fd20db486406c752c2adcfb9c52b250cea2bae1ffd583931f3\": container with ID starting with 84c3bc688cf374fd20db486406c752c2adcfb9c52b250cea2bae1ffd583931f3 not found: ID does not exist"
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.772675 4794 scope.go:117] "RemoveContainer" containerID="16b72ec76fdd869daf9a12348f14e96cba0987dfceaf9c4b9c9435ce5e8459ad"
Feb 16 17:24:34 crc kubenswrapper[4794]: E0216 17:24:34.775314 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60bc3506-fc79-458a-bae4-cedfe5f09450" containerName="nova-metadata-log"
Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.781499 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="60bc3506-fc79-458a-bae4-cedfe5f09450" containerName="nova-metadata-log"
Feb 16 17:24:34 crc kubenswrapper[4794]: E0216 17:24:34.781601 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521b6a44-f328-4e6e-926b-f27a9b9810ad" containerName="nova-cell1-conductor-db-sync"
Feb 16 17:24:34 crc
kubenswrapper[4794]: I0216 17:24:34.781651 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="521b6a44-f328-4e6e-926b-f27a9b9810ad" containerName="nova-cell1-conductor-db-sync" Feb 16 17:24:34 crc kubenswrapper[4794]: E0216 17:24:34.781734 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60bc3506-fc79-458a-bae4-cedfe5f09450" containerName="nova-metadata-metadata" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.781781 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="60bc3506-fc79-458a-bae4-cedfe5f09450" containerName="nova-metadata-metadata" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.784174 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16b72ec76fdd869daf9a12348f14e96cba0987dfceaf9c4b9c9435ce5e8459ad"} err="failed to get container status \"16b72ec76fdd869daf9a12348f14e96cba0987dfceaf9c4b9c9435ce5e8459ad\": rpc error: code = NotFound desc = could not find container \"16b72ec76fdd869daf9a12348f14e96cba0987dfceaf9c4b9c9435ce5e8459ad\": container with ID starting with 16b72ec76fdd869daf9a12348f14e96cba0987dfceaf9c4b9c9435ce5e8459ad not found: ID does not exist" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.784224 4794 scope.go:117] "RemoveContainer" containerID="84c3bc688cf374fd20db486406c752c2adcfb9c52b250cea2bae1ffd583931f3" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.784852 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84c3bc688cf374fd20db486406c752c2adcfb9c52b250cea2bae1ffd583931f3"} err="failed to get container status \"84c3bc688cf374fd20db486406c752c2adcfb9c52b250cea2bae1ffd583931f3\": rpc error: code = NotFound desc = could not find container \"84c3bc688cf374fd20db486406c752c2adcfb9c52b250cea2bae1ffd583931f3\": container with ID starting with 84c3bc688cf374fd20db486406c752c2adcfb9c52b250cea2bae1ffd583931f3 not found: ID does not exist" Feb 16 17:24:34 crc 
kubenswrapper[4794]: I0216 17:24:34.787056 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="60bc3506-fc79-458a-bae4-cedfe5f09450" containerName="nova-metadata-metadata" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.787506 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="521b6a44-f328-4e6e-926b-f27a9b9810ad" containerName="nova-cell1-conductor-db-sync" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.787632 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="60bc3506-fc79-458a-bae4-cedfe5f09450" containerName="nova-metadata-log" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.794101 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.800031 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.849600 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnlw7\" (UniqueName: \"kubernetes.io/projected/33cde066-6417-44f1-9bd6-53ceb52a577b-kube-api-access-dnlw7\") pod \"nova-cell1-conductor-0\" (UID: \"33cde066-6417-44f1-9bd6-53ceb52a577b\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.849651 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33cde066-6417-44f1-9bd6-53ceb52a577b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"33cde066-6417-44f1-9bd6-53ceb52a577b\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.849857 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/33cde066-6417-44f1-9bd6-53ceb52a577b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"33cde066-6417-44f1-9bd6-53ceb52a577b\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.869592 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43f61c36-3ecb-43dc-a9cd-d713af555005" path="/var/lib/kubelet/pods/43f61c36-3ecb-43dc-a9cd-d713af555005/volumes" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.870408 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.872433 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691" Feb 16 17:24:34 crc kubenswrapper[4794]: E0216 17:24:34.872653 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.893425 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.911718 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 17:24:34 crc kubenswrapper[4794]: W0216 17:24:34.926636 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0ed14a7_ee41_453d_8114_8e955b120c40.slice/crio-5df6d44cc4570ddb288c8ae751cff1dd0a5f94ae94ea470cf908de4cea2dd2a6 WatchSource:0}: Error finding container 
5df6d44cc4570ddb288c8ae751cff1dd0a5f94ae94ea470cf908de4cea2dd2a6: Status 404 returned error can't find the container with id 5df6d44cc4570ddb288c8ae751cff1dd0a5f94ae94ea470cf908de4cea2dd2a6 Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.929446 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.931564 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.935600 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.944916 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.950518 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.953495 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnlw7\" (UniqueName: \"kubernetes.io/projected/33cde066-6417-44f1-9bd6-53ceb52a577b-kube-api-access-dnlw7\") pod \"nova-cell1-conductor-0\" (UID: \"33cde066-6417-44f1-9bd6-53ceb52a577b\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.953797 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33cde066-6417-44f1-9bd6-53ceb52a577b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"33cde066-6417-44f1-9bd6-53ceb52a577b\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.954351 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/33cde066-6417-44f1-9bd6-53ceb52a577b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"33cde066-6417-44f1-9bd6-53ceb52a577b\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.970972 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33cde066-6417-44f1-9bd6-53ceb52a577b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"33cde066-6417-44f1-9bd6-53ceb52a577b\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.973056 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnlw7\" (UniqueName: \"kubernetes.io/projected/33cde066-6417-44f1-9bd6-53ceb52a577b-kube-api-access-dnlw7\") pod \"nova-cell1-conductor-0\" (UID: \"33cde066-6417-44f1-9bd6-53ceb52a577b\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:24:34 crc kubenswrapper[4794]: I0216 17:24:34.975958 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33cde066-6417-44f1-9bd6-53ceb52a577b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"33cde066-6417-44f1-9bd6-53ceb52a577b\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.024629 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.061698 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tc8k\" (UniqueName: \"kubernetes.io/projected/6268e9f0-e992-4887-8a99-80a1b5459cb3-kube-api-access-9tc8k\") pod \"nova-metadata-0\" (UID: \"6268e9f0-e992-4887-8a99-80a1b5459cb3\") " pod="openstack/nova-metadata-0" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.061821 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6268e9f0-e992-4887-8a99-80a1b5459cb3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6268e9f0-e992-4887-8a99-80a1b5459cb3\") " pod="openstack/nova-metadata-0" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.063050 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6268e9f0-e992-4887-8a99-80a1b5459cb3-config-data\") pod \"nova-metadata-0\" (UID: \"6268e9f0-e992-4887-8a99-80a1b5459cb3\") " pod="openstack/nova-metadata-0" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.063219 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6268e9f0-e992-4887-8a99-80a1b5459cb3-logs\") pod \"nova-metadata-0\" (UID: \"6268e9f0-e992-4887-8a99-80a1b5459cb3\") " pod="openstack/nova-metadata-0" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.063392 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6268e9f0-e992-4887-8a99-80a1b5459cb3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6268e9f0-e992-4887-8a99-80a1b5459cb3\") " pod="openstack/nova-metadata-0" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.162368 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.165680 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6268e9f0-e992-4887-8a99-80a1b5459cb3-config-data\") pod \"nova-metadata-0\" (UID: \"6268e9f0-e992-4887-8a99-80a1b5459cb3\") " pod="openstack/nova-metadata-0" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.165735 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6268e9f0-e992-4887-8a99-80a1b5459cb3-logs\") pod \"nova-metadata-0\" (UID: \"6268e9f0-e992-4887-8a99-80a1b5459cb3\") " pod="openstack/nova-metadata-0" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.165812 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6268e9f0-e992-4887-8a99-80a1b5459cb3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6268e9f0-e992-4887-8a99-80a1b5459cb3\") " pod="openstack/nova-metadata-0" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.165897 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tc8k\" (UniqueName: \"kubernetes.io/projected/6268e9f0-e992-4887-8a99-80a1b5459cb3-kube-api-access-9tc8k\") pod \"nova-metadata-0\" (UID: \"6268e9f0-e992-4887-8a99-80a1b5459cb3\") " pod="openstack/nova-metadata-0" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.165951 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6268e9f0-e992-4887-8a99-80a1b5459cb3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6268e9f0-e992-4887-8a99-80a1b5459cb3\") " pod="openstack/nova-metadata-0" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.166788 4794 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6268e9f0-e992-4887-8a99-80a1b5459cb3-logs\") pod \"nova-metadata-0\" (UID: \"6268e9f0-e992-4887-8a99-80a1b5459cb3\") " pod="openstack/nova-metadata-0" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.170913 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6268e9f0-e992-4887-8a99-80a1b5459cb3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6268e9f0-e992-4887-8a99-80a1b5459cb3\") " pod="openstack/nova-metadata-0" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.172533 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6268e9f0-e992-4887-8a99-80a1b5459cb3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6268e9f0-e992-4887-8a99-80a1b5459cb3\") " pod="openstack/nova-metadata-0" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.180088 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6268e9f0-e992-4887-8a99-80a1b5459cb3-config-data\") pod \"nova-metadata-0\" (UID: \"6268e9f0-e992-4887-8a99-80a1b5459cb3\") " pod="openstack/nova-metadata-0" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.188198 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tc8k\" (UniqueName: \"kubernetes.io/projected/6268e9f0-e992-4887-8a99-80a1b5459cb3-kube-api-access-9tc8k\") pod \"nova-metadata-0\" (UID: \"6268e9f0-e992-4887-8a99-80a1b5459cb3\") " pod="openstack/nova-metadata-0" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.394112 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.695743 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.716714 4794 generic.go:334] "Generic (PLEG): container finished" podID="863be36f-716a-4890-9790-1e82c5542f1f" containerID="76159a0e72c392553cae5906836fd1e7b1067347b69184a2998a3d394b57803b" exitCode=0 Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.716795 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"863be36f-716a-4890-9790-1e82c5542f1f","Type":"ContainerDied","Data":"76159a0e72c392553cae5906836fd1e7b1067347b69184a2998a3d394b57803b"} Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.716823 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"863be36f-716a-4890-9790-1e82c5542f1f","Type":"ContainerDied","Data":"2aab68831be65f4245248e5c1e565267914c6ec33ee10f0cfdd19761f1030557"} Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.716834 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.716845 4794 scope.go:117] "RemoveContainer" containerID="76159a0e72c392553cae5906836fd1e7b1067347b69184a2998a3d394b57803b" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.730824 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94","Type":"ContainerStarted","Data":"465de4d6e8e79885c1f2bdb23b6f65278df6d98b8028914836cde307df056239"} Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.739459 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0ed14a7-ee41-453d-8114-8e955b120c40","Type":"ContainerStarted","Data":"e1c0620537b9e6151bd065d33ec4815bd4cea215dda129f52b146e6a6a4e74bd"} Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.739506 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0ed14a7-ee41-453d-8114-8e955b120c40","Type":"ContainerStarted","Data":"5df6d44cc4570ddb288c8ae751cff1dd0a5f94ae94ea470cf908de4cea2dd2a6"} Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.791090 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/863be36f-716a-4890-9790-1e82c5542f1f-logs\") pod \"863be36f-716a-4890-9790-1e82c5542f1f\" (UID: \"863be36f-716a-4890-9790-1e82c5542f1f\") " Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.791372 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn2rw\" (UniqueName: \"kubernetes.io/projected/863be36f-716a-4890-9790-1e82c5542f1f-kube-api-access-kn2rw\") pod \"863be36f-716a-4890-9790-1e82c5542f1f\" (UID: \"863be36f-716a-4890-9790-1e82c5542f1f\") " Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.791439 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/863be36f-716a-4890-9790-1e82c5542f1f-config-data\") pod \"863be36f-716a-4890-9790-1e82c5542f1f\" (UID: \"863be36f-716a-4890-9790-1e82c5542f1f\") " Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.791501 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863be36f-716a-4890-9790-1e82c5542f1f-combined-ca-bundle\") pod \"863be36f-716a-4890-9790-1e82c5542f1f\" (UID: \"863be36f-716a-4890-9790-1e82c5542f1f\") " Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.796576 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/863be36f-716a-4890-9790-1e82c5542f1f-logs" (OuterVolumeSpecName: "logs") pod "863be36f-716a-4890-9790-1e82c5542f1f" (UID: "863be36f-716a-4890-9790-1e82c5542f1f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.819993 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/863be36f-716a-4890-9790-1e82c5542f1f-kube-api-access-kn2rw" (OuterVolumeSpecName: "kube-api-access-kn2rw") pod "863be36f-716a-4890-9790-1e82c5542f1f" (UID: "863be36f-716a-4890-9790-1e82c5542f1f"). InnerVolumeSpecName "kube-api-access-kn2rw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.845843 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.8458240639999999 podStartE2EDuration="1.845824064s" podCreationTimestamp="2026-02-16 17:24:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:24:35.771465899 +0000 UTC m=+1501.719560556" watchObservedRunningTime="2026-02-16 17:24:35.845824064 +0000 UTC m=+1501.793918711" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.860623 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863be36f-716a-4890-9790-1e82c5542f1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "863be36f-716a-4890-9790-1e82c5542f1f" (UID: "863be36f-716a-4890-9790-1e82c5542f1f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.871493 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863be36f-716a-4890-9790-1e82c5542f1f-config-data" (OuterVolumeSpecName: "config-data") pod "863be36f-716a-4890-9790-1e82c5542f1f" (UID: "863be36f-716a-4890-9790-1e82c5542f1f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.884467 4794 scope.go:117] "RemoveContainer" containerID="7da1c55f756dd17a88b39d85dc53a6648ed8d9966956f0f579ccd7603fac65b7" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.929306 4794 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/863be36f-716a-4890-9790-1e82c5542f1f-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.929365 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn2rw\" (UniqueName: \"kubernetes.io/projected/863be36f-716a-4890-9790-1e82c5542f1f-kube-api-access-kn2rw\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.929376 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/863be36f-716a-4890-9790-1e82c5542f1f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.929388 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863be36f-716a-4890-9790-1e82c5542f1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.942084 4794 scope.go:117] "RemoveContainer" containerID="76159a0e72c392553cae5906836fd1e7b1067347b69184a2998a3d394b57803b" Feb 16 17:24:35 crc kubenswrapper[4794]: E0216 17:24:35.945402 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76159a0e72c392553cae5906836fd1e7b1067347b69184a2998a3d394b57803b\": container with ID starting with 76159a0e72c392553cae5906836fd1e7b1067347b69184a2998a3d394b57803b not found: ID does not exist" containerID="76159a0e72c392553cae5906836fd1e7b1067347b69184a2998a3d394b57803b" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 
17:24:35.945432 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76159a0e72c392553cae5906836fd1e7b1067347b69184a2998a3d394b57803b"} err="failed to get container status \"76159a0e72c392553cae5906836fd1e7b1067347b69184a2998a3d394b57803b\": rpc error: code = NotFound desc = could not find container \"76159a0e72c392553cae5906836fd1e7b1067347b69184a2998a3d394b57803b\": container with ID starting with 76159a0e72c392553cae5906836fd1e7b1067347b69184a2998a3d394b57803b not found: ID does not exist" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.945453 4794 scope.go:117] "RemoveContainer" containerID="7da1c55f756dd17a88b39d85dc53a6648ed8d9966956f0f579ccd7603fac65b7" Feb 16 17:24:35 crc kubenswrapper[4794]: E0216 17:24:35.945820 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7da1c55f756dd17a88b39d85dc53a6648ed8d9966956f0f579ccd7603fac65b7\": container with ID starting with 7da1c55f756dd17a88b39d85dc53a6648ed8d9966956f0f579ccd7603fac65b7 not found: ID does not exist" containerID="7da1c55f756dd17a88b39d85dc53a6648ed8d9966956f0f579ccd7603fac65b7" Feb 16 17:24:35 crc kubenswrapper[4794]: I0216 17:24:35.945852 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7da1c55f756dd17a88b39d85dc53a6648ed8d9966956f0f579ccd7603fac65b7"} err="failed to get container status \"7da1c55f756dd17a88b39d85dc53a6648ed8d9966956f0f579ccd7603fac65b7\": rpc error: code = NotFound desc = could not find container \"7da1c55f756dd17a88b39d85dc53a6648ed8d9966956f0f579ccd7603fac65b7\": container with ID starting with 7da1c55f756dd17a88b39d85dc53a6648ed8d9966956f0f579ccd7603fac65b7 not found: ID does not exist" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.184411 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.239216 4794 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.253133 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.269563 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 17:24:36 crc kubenswrapper[4794]: E0216 17:24:36.272001 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863be36f-716a-4890-9790-1e82c5542f1f" containerName="nova-api-api" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.272031 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="863be36f-716a-4890-9790-1e82c5542f1f" containerName="nova-api-api" Feb 16 17:24:36 crc kubenswrapper[4794]: E0216 17:24:36.272050 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863be36f-716a-4890-9790-1e82c5542f1f" containerName="nova-api-log" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.272060 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="863be36f-716a-4890-9790-1e82c5542f1f" containerName="nova-api-log" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.272592 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="863be36f-716a-4890-9790-1e82c5542f1f" containerName="nova-api-api" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.272620 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="863be36f-716a-4890-9790-1e82c5542f1f" containerName="nova-api-log" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.275734 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.279910 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.294128 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81beeae8-f0b5-480c-92cf-ce047e2a55a5-config-data\") pod \"nova-api-0\" (UID: \"81beeae8-f0b5-480c-92cf-ce047e2a55a5\") " pod="openstack/nova-api-0" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.294319 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7j5w\" (UniqueName: \"kubernetes.io/projected/81beeae8-f0b5-480c-92cf-ce047e2a55a5-kube-api-access-p7j5w\") pod \"nova-api-0\" (UID: \"81beeae8-f0b5-480c-92cf-ce047e2a55a5\") " pod="openstack/nova-api-0" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.294603 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81beeae8-f0b5-480c-92cf-ce047e2a55a5-logs\") pod \"nova-api-0\" (UID: \"81beeae8-f0b5-480c-92cf-ce047e2a55a5\") " pod="openstack/nova-api-0" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.294766 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81beeae8-f0b5-480c-92cf-ce047e2a55a5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"81beeae8-f0b5-480c-92cf-ce047e2a55a5\") " pod="openstack/nova-api-0" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.303311 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.326443 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 
17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.396499 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81beeae8-f0b5-480c-92cf-ce047e2a55a5-logs\") pod \"nova-api-0\" (UID: \"81beeae8-f0b5-480c-92cf-ce047e2a55a5\") " pod="openstack/nova-api-0" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.396562 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81beeae8-f0b5-480c-92cf-ce047e2a55a5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"81beeae8-f0b5-480c-92cf-ce047e2a55a5\") " pod="openstack/nova-api-0" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.396649 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81beeae8-f0b5-480c-92cf-ce047e2a55a5-config-data\") pod \"nova-api-0\" (UID: \"81beeae8-f0b5-480c-92cf-ce047e2a55a5\") " pod="openstack/nova-api-0" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.396708 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7j5w\" (UniqueName: \"kubernetes.io/projected/81beeae8-f0b5-480c-92cf-ce047e2a55a5-kube-api-access-p7j5w\") pod \"nova-api-0\" (UID: \"81beeae8-f0b5-480c-92cf-ce047e2a55a5\") " pod="openstack/nova-api-0" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.398069 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81beeae8-f0b5-480c-92cf-ce047e2a55a5-logs\") pod \"nova-api-0\" (UID: \"81beeae8-f0b5-480c-92cf-ce047e2a55a5\") " pod="openstack/nova-api-0" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.401739 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81beeae8-f0b5-480c-92cf-ce047e2a55a5-config-data\") pod \"nova-api-0\" (UID: 
\"81beeae8-f0b5-480c-92cf-ce047e2a55a5\") " pod="openstack/nova-api-0" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.401859 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81beeae8-f0b5-480c-92cf-ce047e2a55a5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"81beeae8-f0b5-480c-92cf-ce047e2a55a5\") " pod="openstack/nova-api-0" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.421223 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7j5w\" (UniqueName: \"kubernetes.io/projected/81beeae8-f0b5-480c-92cf-ce047e2a55a5-kube-api-access-p7j5w\") pod \"nova-api-0\" (UID: \"81beeae8-f0b5-480c-92cf-ce047e2a55a5\") " pod="openstack/nova-api-0" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.505065 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.781188 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"33cde066-6417-44f1-9bd6-53ceb52a577b","Type":"ContainerStarted","Data":"60b37cafe12a1c2187701ac9b4d8697835521e4d32937fd2c7964d6d086dc540"} Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.781591 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.781605 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"33cde066-6417-44f1-9bd6-53ceb52a577b","Type":"ContainerStarted","Data":"4f171cbc055eeeba02fb1a5b5df8e584f29fa20ff1e31a8abf419fb3067ac3b7"} Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.800673 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.800656819 podStartE2EDuration="2.800656819s" 
podCreationTimestamp="2026-02-16 17:24:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:24:36.796936784 +0000 UTC m=+1502.745031441" watchObservedRunningTime="2026-02-16 17:24:36.800656819 +0000 UTC m=+1502.748751466" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.824478 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60bc3506-fc79-458a-bae4-cedfe5f09450" path="/var/lib/kubelet/pods/60bc3506-fc79-458a-bae4-cedfe5f09450/volumes" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.825183 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="863be36f-716a-4890-9790-1e82c5542f1f" path="/var/lib/kubelet/pods/863be36f-716a-4890-9790-1e82c5542f1f/volumes" Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.825829 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6268e9f0-e992-4887-8a99-80a1b5459cb3","Type":"ContainerStarted","Data":"c98da034c395daf027cfac4174ee49fc0913d68eaa7a105f4c31c0046e08cd64"} Feb 16 17:24:36 crc kubenswrapper[4794]: I0216 17:24:36.825849 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6268e9f0-e992-4887-8a99-80a1b5459cb3","Type":"ContainerStarted","Data":"a09a142da95bf005d0ccb35d14810bf29a50ed1cea71f06256e14a6d44dd3adf"} Feb 16 17:24:37 crc kubenswrapper[4794]: W0216 17:24:37.067015 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81beeae8_f0b5_480c_92cf_ce047e2a55a5.slice/crio-9c04b4268b2112a7ea6571bd3ce512e706c9dc4ced78d1664191e54ab33413f4 WatchSource:0}: Error finding container 9c04b4268b2112a7ea6571bd3ce512e706c9dc4ced78d1664191e54ab33413f4: Status 404 returned error can't find the container with id 9c04b4268b2112a7ea6571bd3ce512e706c9dc4ced78d1664191e54ab33413f4 Feb 16 17:24:37 crc 
kubenswrapper[4794]: I0216 17:24:37.067531 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:24:37 crc kubenswrapper[4794]: I0216 17:24:37.808842 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6268e9f0-e992-4887-8a99-80a1b5459cb3","Type":"ContainerStarted","Data":"e14af41e3422319bb29fab616cc6d9d89fa53d6d466f15dcbd3087e841726665"} Feb 16 17:24:37 crc kubenswrapper[4794]: I0216 17:24:37.810950 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"81beeae8-f0b5-480c-92cf-ce047e2a55a5","Type":"ContainerStarted","Data":"a12f9787d1b709ac0cfbe17d3850951219242d69eb011c85c8a96eae34aea977"} Feb 16 17:24:37 crc kubenswrapper[4794]: I0216 17:24:37.810979 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"81beeae8-f0b5-480c-92cf-ce047e2a55a5","Type":"ContainerStarted","Data":"ffbc9497b5c9a21de01f28e99dcd0b65281361a347588ba5834a9ca830e0fbd6"} Feb 16 17:24:37 crc kubenswrapper[4794]: I0216 17:24:37.810989 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"81beeae8-f0b5-480c-92cf-ce047e2a55a5","Type":"ContainerStarted","Data":"9c04b4268b2112a7ea6571bd3ce512e706c9dc4ced78d1664191e54ab33413f4"} Feb 16 17:24:37 crc kubenswrapper[4794]: I0216 17:24:37.814550 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94","Type":"ContainerStarted","Data":"8e8c0eb292e4f32977a9154a6d67924fdfacec5b2d2b07da700d05672a74c88d"} Feb 16 17:24:37 crc kubenswrapper[4794]: I0216 17:24:37.814790 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 17:24:37 crc kubenswrapper[4794]: I0216 17:24:37.843728 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.843701953 
podStartE2EDuration="3.843701953s" podCreationTimestamp="2026-02-16 17:24:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:24:37.827394802 +0000 UTC m=+1503.775489459" watchObservedRunningTime="2026-02-16 17:24:37.843701953 +0000 UTC m=+1503.791796600" Feb 16 17:24:37 crc kubenswrapper[4794]: I0216 17:24:37.853407 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.853391508 podStartE2EDuration="1.853391508s" podCreationTimestamp="2026-02-16 17:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:24:37.851903816 +0000 UTC m=+1503.799998463" watchObservedRunningTime="2026-02-16 17:24:37.853391508 +0000 UTC m=+1503.801486155" Feb 16 17:24:37 crc kubenswrapper[4794]: I0216 17:24:37.891299 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.988939659 podStartE2EDuration="6.891278021s" podCreationTimestamp="2026-02-16 17:24:31 +0000 UTC" firstStartedPulling="2026-02-16 17:24:32.884776514 +0000 UTC m=+1498.832871161" lastFinishedPulling="2026-02-16 17:24:36.787114876 +0000 UTC m=+1502.735209523" observedRunningTime="2026-02-16 17:24:37.884579331 +0000 UTC m=+1503.832673978" watchObservedRunningTime="2026-02-16 17:24:37.891278021 +0000 UTC m=+1503.839372658" Feb 16 17:24:37 crc kubenswrapper[4794]: I0216 17:24:37.987603 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-sf58s"] Feb 16 17:24:37 crc kubenswrapper[4794]: I0216 17:24:37.989002 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-sf58s" Feb 16 17:24:37 crc kubenswrapper[4794]: I0216 17:24:37.992047 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 16 17:24:37 crc kubenswrapper[4794]: I0216 17:24:37.992221 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 16 17:24:37 crc kubenswrapper[4794]: I0216 17:24:37.992260 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 17:24:37 crc kubenswrapper[4794]: I0216 17:24:37.992407 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-kxvmt" Feb 16 17:24:38 crc kubenswrapper[4794]: I0216 17:24:38.013600 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-sf58s"] Feb 16 17:24:38 crc kubenswrapper[4794]: I0216 17:24:38.146438 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/377738df-5701-4cde-a811-3c975e20fce7-combined-ca-bundle\") pod \"aodh-db-sync-sf58s\" (UID: \"377738df-5701-4cde-a811-3c975e20fce7\") " pod="openstack/aodh-db-sync-sf58s" Feb 16 17:24:38 crc kubenswrapper[4794]: I0216 17:24:38.146495 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/377738df-5701-4cde-a811-3c975e20fce7-config-data\") pod \"aodh-db-sync-sf58s\" (UID: \"377738df-5701-4cde-a811-3c975e20fce7\") " pod="openstack/aodh-db-sync-sf58s" Feb 16 17:24:38 crc kubenswrapper[4794]: I0216 17:24:38.146539 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/377738df-5701-4cde-a811-3c975e20fce7-scripts\") pod \"aodh-db-sync-sf58s\" (UID: \"377738df-5701-4cde-a811-3c975e20fce7\") " 
pod="openstack/aodh-db-sync-sf58s" Feb 16 17:24:38 crc kubenswrapper[4794]: I0216 17:24:38.146659 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clgrm\" (UniqueName: \"kubernetes.io/projected/377738df-5701-4cde-a811-3c975e20fce7-kube-api-access-clgrm\") pod \"aodh-db-sync-sf58s\" (UID: \"377738df-5701-4cde-a811-3c975e20fce7\") " pod="openstack/aodh-db-sync-sf58s" Feb 16 17:24:38 crc kubenswrapper[4794]: I0216 17:24:38.249299 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clgrm\" (UniqueName: \"kubernetes.io/projected/377738df-5701-4cde-a811-3c975e20fce7-kube-api-access-clgrm\") pod \"aodh-db-sync-sf58s\" (UID: \"377738df-5701-4cde-a811-3c975e20fce7\") " pod="openstack/aodh-db-sync-sf58s" Feb 16 17:24:38 crc kubenswrapper[4794]: I0216 17:24:38.249944 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/377738df-5701-4cde-a811-3c975e20fce7-combined-ca-bundle\") pod \"aodh-db-sync-sf58s\" (UID: \"377738df-5701-4cde-a811-3c975e20fce7\") " pod="openstack/aodh-db-sync-sf58s" Feb 16 17:24:38 crc kubenswrapper[4794]: I0216 17:24:38.250058 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/377738df-5701-4cde-a811-3c975e20fce7-config-data\") pod \"aodh-db-sync-sf58s\" (UID: \"377738df-5701-4cde-a811-3c975e20fce7\") " pod="openstack/aodh-db-sync-sf58s" Feb 16 17:24:38 crc kubenswrapper[4794]: I0216 17:24:38.250161 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/377738df-5701-4cde-a811-3c975e20fce7-scripts\") pod \"aodh-db-sync-sf58s\" (UID: \"377738df-5701-4cde-a811-3c975e20fce7\") " pod="openstack/aodh-db-sync-sf58s" Feb 16 17:24:38 crc kubenswrapper[4794]: I0216 17:24:38.257448 4794 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/377738df-5701-4cde-a811-3c975e20fce7-combined-ca-bundle\") pod \"aodh-db-sync-sf58s\" (UID: \"377738df-5701-4cde-a811-3c975e20fce7\") " pod="openstack/aodh-db-sync-sf58s" Feb 16 17:24:38 crc kubenswrapper[4794]: I0216 17:24:38.257924 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/377738df-5701-4cde-a811-3c975e20fce7-config-data\") pod \"aodh-db-sync-sf58s\" (UID: \"377738df-5701-4cde-a811-3c975e20fce7\") " pod="openstack/aodh-db-sync-sf58s" Feb 16 17:24:38 crc kubenswrapper[4794]: I0216 17:24:38.258022 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/377738df-5701-4cde-a811-3c975e20fce7-scripts\") pod \"aodh-db-sync-sf58s\" (UID: \"377738df-5701-4cde-a811-3c975e20fce7\") " pod="openstack/aodh-db-sync-sf58s" Feb 16 17:24:38 crc kubenswrapper[4794]: I0216 17:24:38.279901 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clgrm\" (UniqueName: \"kubernetes.io/projected/377738df-5701-4cde-a811-3c975e20fce7-kube-api-access-clgrm\") pod \"aodh-db-sync-sf58s\" (UID: \"377738df-5701-4cde-a811-3c975e20fce7\") " pod="openstack/aodh-db-sync-sf58s" Feb 16 17:24:38 crc kubenswrapper[4794]: I0216 17:24:38.307756 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-sf58s" Feb 16 17:24:39 crc kubenswrapper[4794]: I0216 17:24:39.163915 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-sf58s"] Feb 16 17:24:39 crc kubenswrapper[4794]: I0216 17:24:39.373856 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 17:24:39 crc kubenswrapper[4794]: I0216 17:24:39.845993 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-sf58s" event={"ID":"377738df-5701-4cde-a811-3c975e20fce7","Type":"ContainerStarted","Data":"76676a17b753f9d336f215d8e67ecdd21d6d7f67291ee54b8ae627cab5abbc80"} Feb 16 17:24:40 crc kubenswrapper[4794]: I0216 17:24:40.395083 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 17:24:40 crc kubenswrapper[4794]: I0216 17:24:40.395133 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 17:24:44 crc kubenswrapper[4794]: I0216 17:24:44.373788 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 17:24:44 crc kubenswrapper[4794]: I0216 17:24:44.402344 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 17:24:44 crc kubenswrapper[4794]: I0216 17:24:44.897986 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-sf58s" event={"ID":"377738df-5701-4cde-a811-3c975e20fce7","Type":"ContainerStarted","Data":"f42f3f6652e80673cd93402c97cc19fc746d71d59bd381ad65fa0d9465ac6651"} Feb 16 17:24:44 crc kubenswrapper[4794]: I0216 17:24:44.918660 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-sf58s" podStartSLOduration=3.016164438 podStartE2EDuration="7.918639397s" podCreationTimestamp="2026-02-16 17:24:37 +0000 UTC" firstStartedPulling="2026-02-16 
17:24:39.170822854 +0000 UTC m=+1505.118917501" lastFinishedPulling="2026-02-16 17:24:44.073297793 +0000 UTC m=+1510.021392460" observedRunningTime="2026-02-16 17:24:44.914321745 +0000 UTC m=+1510.862416402" watchObservedRunningTime="2026-02-16 17:24:44.918639397 +0000 UTC m=+1510.866734044" Feb 16 17:24:44 crc kubenswrapper[4794]: I0216 17:24:44.932182 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 17:24:45 crc kubenswrapper[4794]: I0216 17:24:45.225270 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 16 17:24:45 crc kubenswrapper[4794]: I0216 17:24:45.395196 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 17:24:45 crc kubenswrapper[4794]: I0216 17:24:45.395246 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 17:24:46 crc kubenswrapper[4794]: I0216 17:24:46.407457 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6268e9f0-e992-4887-8a99-80a1b5459cb3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:24:46 crc kubenswrapper[4794]: I0216 17:24:46.407502 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6268e9f0-e992-4887-8a99-80a1b5459cb3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:24:46 crc kubenswrapper[4794]: I0216 17:24:46.505624 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 17:24:46 crc kubenswrapper[4794]: I0216 17:24:46.505693 4794 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 17:24:46 crc kubenswrapper[4794]: I0216 17:24:46.791494 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691" Feb 16 17:24:46 crc kubenswrapper[4794]: E0216 17:24:46.792209 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:24:46 crc kubenswrapper[4794]: I0216 17:24:46.916911 4794 generic.go:334] "Generic (PLEG): container finished" podID="377738df-5701-4cde-a811-3c975e20fce7" containerID="f42f3f6652e80673cd93402c97cc19fc746d71d59bd381ad65fa0d9465ac6651" exitCode=0 Feb 16 17:24:46 crc kubenswrapper[4794]: I0216 17:24:46.916962 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-sf58s" event={"ID":"377738df-5701-4cde-a811-3c975e20fce7","Type":"ContainerDied","Data":"f42f3f6652e80673cd93402c97cc19fc746d71d59bd381ad65fa0d9465ac6651"} Feb 16 17:24:47 crc kubenswrapper[4794]: I0216 17:24:47.587680 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="81beeae8-f0b5-480c-92cf-ce047e2a55a5" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.251:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 17:24:47 crc kubenswrapper[4794]: I0216 17:24:47.588140 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="81beeae8-f0b5-480c-92cf-ce047e2a55a5" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.251:8774/\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" Feb 16 17:24:48 crc kubenswrapper[4794]: I0216 17:24:48.567195 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-sf58s" Feb 16 17:24:48 crc kubenswrapper[4794]: I0216 17:24:48.727870 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/377738df-5701-4cde-a811-3c975e20fce7-config-data\") pod \"377738df-5701-4cde-a811-3c975e20fce7\" (UID: \"377738df-5701-4cde-a811-3c975e20fce7\") " Feb 16 17:24:48 crc kubenswrapper[4794]: I0216 17:24:48.728275 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/377738df-5701-4cde-a811-3c975e20fce7-combined-ca-bundle\") pod \"377738df-5701-4cde-a811-3c975e20fce7\" (UID: \"377738df-5701-4cde-a811-3c975e20fce7\") " Feb 16 17:24:48 crc kubenswrapper[4794]: I0216 17:24:48.728442 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/377738df-5701-4cde-a811-3c975e20fce7-scripts\") pod \"377738df-5701-4cde-a811-3c975e20fce7\" (UID: \"377738df-5701-4cde-a811-3c975e20fce7\") " Feb 16 17:24:48 crc kubenswrapper[4794]: I0216 17:24:48.728526 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clgrm\" (UniqueName: \"kubernetes.io/projected/377738df-5701-4cde-a811-3c975e20fce7-kube-api-access-clgrm\") pod \"377738df-5701-4cde-a811-3c975e20fce7\" (UID: \"377738df-5701-4cde-a811-3c975e20fce7\") " Feb 16 17:24:48 crc kubenswrapper[4794]: I0216 17:24:48.733971 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/377738df-5701-4cde-a811-3c975e20fce7-scripts" (OuterVolumeSpecName: "scripts") pod "377738df-5701-4cde-a811-3c975e20fce7" (UID: "377738df-5701-4cde-a811-3c975e20fce7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:24:48 crc kubenswrapper[4794]: I0216 17:24:48.735015 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/377738df-5701-4cde-a811-3c975e20fce7-kube-api-access-clgrm" (OuterVolumeSpecName: "kube-api-access-clgrm") pod "377738df-5701-4cde-a811-3c975e20fce7" (UID: "377738df-5701-4cde-a811-3c975e20fce7"). InnerVolumeSpecName "kube-api-access-clgrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:24:48 crc kubenswrapper[4794]: I0216 17:24:48.767163 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/377738df-5701-4cde-a811-3c975e20fce7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "377738df-5701-4cde-a811-3c975e20fce7" (UID: "377738df-5701-4cde-a811-3c975e20fce7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:24:48 crc kubenswrapper[4794]: I0216 17:24:48.780103 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/377738df-5701-4cde-a811-3c975e20fce7-config-data" (OuterVolumeSpecName: "config-data") pod "377738df-5701-4cde-a811-3c975e20fce7" (UID: "377738df-5701-4cde-a811-3c975e20fce7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:24:48 crc kubenswrapper[4794]: I0216 17:24:48.831238 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/377738df-5701-4cde-a811-3c975e20fce7-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:48 crc kubenswrapper[4794]: I0216 17:24:48.831277 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clgrm\" (UniqueName: \"kubernetes.io/projected/377738df-5701-4cde-a811-3c975e20fce7-kube-api-access-clgrm\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:48 crc kubenswrapper[4794]: I0216 17:24:48.831288 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/377738df-5701-4cde-a811-3c975e20fce7-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:48 crc kubenswrapper[4794]: I0216 17:24:48.831297 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/377738df-5701-4cde-a811-3c975e20fce7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:48 crc kubenswrapper[4794]: I0216 17:24:48.951373 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-sf58s" event={"ID":"377738df-5701-4cde-a811-3c975e20fce7","Type":"ContainerDied","Data":"76676a17b753f9d336f215d8e67ecdd21d6d7f67291ee54b8ae627cab5abbc80"} Feb 16 17:24:48 crc kubenswrapper[4794]: I0216 17:24:48.951850 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76676a17b753f9d336f215d8e67ecdd21d6d7f67291ee54b8ae627cab5abbc80" Feb 16 17:24:48 crc kubenswrapper[4794]: I0216 17:24:48.951567 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-sf58s" Feb 16 17:24:52 crc kubenswrapper[4794]: I0216 17:24:52.585642 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 16 17:24:52 crc kubenswrapper[4794]: E0216 17:24:52.586436 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="377738df-5701-4cde-a811-3c975e20fce7" containerName="aodh-db-sync" Feb 16 17:24:52 crc kubenswrapper[4794]: I0216 17:24:52.586452 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="377738df-5701-4cde-a811-3c975e20fce7" containerName="aodh-db-sync" Feb 16 17:24:52 crc kubenswrapper[4794]: I0216 17:24:52.586674 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="377738df-5701-4cde-a811-3c975e20fce7" containerName="aodh-db-sync" Feb 16 17:24:52 crc kubenswrapper[4794]: I0216 17:24:52.595475 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 16 17:24:52 crc kubenswrapper[4794]: I0216 17:24:52.600226 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 16 17:24:52 crc kubenswrapper[4794]: I0216 17:24:52.600894 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-kxvmt" Feb 16 17:24:52 crc kubenswrapper[4794]: I0216 17:24:52.601070 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 16 17:24:52 crc kubenswrapper[4794]: I0216 17:24:52.652171 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 16 17:24:52 crc kubenswrapper[4794]: I0216 17:24:52.729712 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0135c16b-58fd-4898-b711-786fa961ddfe-scripts\") pod \"aodh-0\" (UID: \"0135c16b-58fd-4898-b711-786fa961ddfe\") " pod="openstack/aodh-0" Feb 16 17:24:52 crc kubenswrapper[4794]: I0216 
17:24:52.729810 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0135c16b-58fd-4898-b711-786fa961ddfe-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0135c16b-58fd-4898-b711-786fa961ddfe\") " pod="openstack/aodh-0" Feb 16 17:24:52 crc kubenswrapper[4794]: I0216 17:24:52.731624 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0135c16b-58fd-4898-b711-786fa961ddfe-config-data\") pod \"aodh-0\" (UID: \"0135c16b-58fd-4898-b711-786fa961ddfe\") " pod="openstack/aodh-0" Feb 16 17:24:52 crc kubenswrapper[4794]: I0216 17:24:52.731878 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnccv\" (UniqueName: \"kubernetes.io/projected/0135c16b-58fd-4898-b711-786fa961ddfe-kube-api-access-mnccv\") pod \"aodh-0\" (UID: \"0135c16b-58fd-4898-b711-786fa961ddfe\") " pod="openstack/aodh-0" Feb 16 17:24:52 crc kubenswrapper[4794]: I0216 17:24:52.833936 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0135c16b-58fd-4898-b711-786fa961ddfe-scripts\") pod \"aodh-0\" (UID: \"0135c16b-58fd-4898-b711-786fa961ddfe\") " pod="openstack/aodh-0" Feb 16 17:24:52 crc kubenswrapper[4794]: I0216 17:24:52.834030 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0135c16b-58fd-4898-b711-786fa961ddfe-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0135c16b-58fd-4898-b711-786fa961ddfe\") " pod="openstack/aodh-0" Feb 16 17:24:52 crc kubenswrapper[4794]: I0216 17:24:52.834085 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0135c16b-58fd-4898-b711-786fa961ddfe-config-data\") pod \"aodh-0\" 
(UID: \"0135c16b-58fd-4898-b711-786fa961ddfe\") " pod="openstack/aodh-0" Feb 16 17:24:52 crc kubenswrapper[4794]: I0216 17:24:52.834233 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnccv\" (UniqueName: \"kubernetes.io/projected/0135c16b-58fd-4898-b711-786fa961ddfe-kube-api-access-mnccv\") pod \"aodh-0\" (UID: \"0135c16b-58fd-4898-b711-786fa961ddfe\") " pod="openstack/aodh-0" Feb 16 17:24:52 crc kubenswrapper[4794]: I0216 17:24:52.840279 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0135c16b-58fd-4898-b711-786fa961ddfe-scripts\") pod \"aodh-0\" (UID: \"0135c16b-58fd-4898-b711-786fa961ddfe\") " pod="openstack/aodh-0" Feb 16 17:24:52 crc kubenswrapper[4794]: I0216 17:24:52.840610 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0135c16b-58fd-4898-b711-786fa961ddfe-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0135c16b-58fd-4898-b711-786fa961ddfe\") " pod="openstack/aodh-0" Feb 16 17:24:52 crc kubenswrapper[4794]: I0216 17:24:52.854762 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0135c16b-58fd-4898-b711-786fa961ddfe-config-data\") pod \"aodh-0\" (UID: \"0135c16b-58fd-4898-b711-786fa961ddfe\") " pod="openstack/aodh-0" Feb 16 17:24:52 crc kubenswrapper[4794]: I0216 17:24:52.856376 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnccv\" (UniqueName: \"kubernetes.io/projected/0135c16b-58fd-4898-b711-786fa961ddfe-kube-api-access-mnccv\") pod \"aodh-0\" (UID: \"0135c16b-58fd-4898-b711-786fa961ddfe\") " pod="openstack/aodh-0" Feb 16 17:24:52 crc kubenswrapper[4794]: I0216 17:24:52.936160 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 16 17:24:53 crc kubenswrapper[4794]: I0216 17:24:53.478391 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 16 17:24:53 crc kubenswrapper[4794]: W0216 17:24:53.484472 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0135c16b_58fd_4898_b711_786fa961ddfe.slice/crio-a23741f0aaed88324d4a1c6cb639e09a1940213390d84f2df3117f248acd462f WatchSource:0}: Error finding container a23741f0aaed88324d4a1c6cb639e09a1940213390d84f2df3117f248acd462f: Status 404 returned error can't find the container with id a23741f0aaed88324d4a1c6cb639e09a1940213390d84f2df3117f248acd462f Feb 16 17:24:54 crc kubenswrapper[4794]: I0216 17:24:54.030150 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0135c16b-58fd-4898-b711-786fa961ddfe","Type":"ContainerStarted","Data":"a23741f0aaed88324d4a1c6cb639e09a1940213390d84f2df3117f248acd462f"} Feb 16 17:24:54 crc kubenswrapper[4794]: I0216 17:24:54.742039 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:54 crc kubenswrapper[4794]: I0216 17:24:54.898969 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86v64\" (UniqueName: \"kubernetes.io/projected/7ef6f08e-df36-49a2-b05e-b62545488b2d-kube-api-access-86v64\") pod \"7ef6f08e-df36-49a2-b05e-b62545488b2d\" (UID: \"7ef6f08e-df36-49a2-b05e-b62545488b2d\") " Feb 16 17:24:54 crc kubenswrapper[4794]: I0216 17:24:54.899528 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ef6f08e-df36-49a2-b05e-b62545488b2d-config-data\") pod \"7ef6f08e-df36-49a2-b05e-b62545488b2d\" (UID: \"7ef6f08e-df36-49a2-b05e-b62545488b2d\") " Feb 16 17:24:54 crc kubenswrapper[4794]: I0216 17:24:54.899620 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef6f08e-df36-49a2-b05e-b62545488b2d-combined-ca-bundle\") pod \"7ef6f08e-df36-49a2-b05e-b62545488b2d\" (UID: \"7ef6f08e-df36-49a2-b05e-b62545488b2d\") " Feb 16 17:24:54 crc kubenswrapper[4794]: I0216 17:24:54.907433 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ef6f08e-df36-49a2-b05e-b62545488b2d-kube-api-access-86v64" (OuterVolumeSpecName: "kube-api-access-86v64") pod "7ef6f08e-df36-49a2-b05e-b62545488b2d" (UID: "7ef6f08e-df36-49a2-b05e-b62545488b2d"). InnerVolumeSpecName "kube-api-access-86v64". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:24:54 crc kubenswrapper[4794]: I0216 17:24:54.938655 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef6f08e-df36-49a2-b05e-b62545488b2d-config-data" (OuterVolumeSpecName: "config-data") pod "7ef6f08e-df36-49a2-b05e-b62545488b2d" (UID: "7ef6f08e-df36-49a2-b05e-b62545488b2d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:24:54 crc kubenswrapper[4794]: I0216 17:24:54.940486 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef6f08e-df36-49a2-b05e-b62545488b2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ef6f08e-df36-49a2-b05e-b62545488b2d" (UID: "7ef6f08e-df36-49a2-b05e-b62545488b2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.003034 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86v64\" (UniqueName: \"kubernetes.io/projected/7ef6f08e-df36-49a2-b05e-b62545488b2d-kube-api-access-86v64\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.003073 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ef6f08e-df36-49a2-b05e-b62545488b2d-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.003084 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ef6f08e-df36-49a2-b05e-b62545488b2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.042795 4794 generic.go:334] "Generic (PLEG): container finished" podID="7ef6f08e-df36-49a2-b05e-b62545488b2d" containerID="2079d767c31a67dd91206f1ef52ef209de8fe0db27ad7f46c94629596d1cac5e" exitCode=137 Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.042880 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ef6f08e-df36-49a2-b05e-b62545488b2d","Type":"ContainerDied","Data":"2079d767c31a67dd91206f1ef52ef209de8fe0db27ad7f46c94629596d1cac5e"} Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.042916 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ef6f08e-df36-49a2-b05e-b62545488b2d","Type":"ContainerDied","Data":"de3938786386d6912af785e3fc893d9dd13710d5399a97c82c605b6154fa62c4"} Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.042939 4794 scope.go:117] "RemoveContainer" containerID="2079d767c31a67dd91206f1ef52ef209de8fe0db27ad7f46c94629596d1cac5e" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.043101 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.060808 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0135c16b-58fd-4898-b711-786fa961ddfe","Type":"ContainerStarted","Data":"4e7db69f5536609f27e83c67261c26ed7d0d609ce0198a15105744257717f0ce"} Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.089019 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.091181 4794 scope.go:117] "RemoveContainer" containerID="2079d767c31a67dd91206f1ef52ef209de8fe0db27ad7f46c94629596d1cac5e" Feb 16 17:24:55 crc kubenswrapper[4794]: E0216 17:24:55.091925 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2079d767c31a67dd91206f1ef52ef209de8fe0db27ad7f46c94629596d1cac5e\": container with ID starting with 2079d767c31a67dd91206f1ef52ef209de8fe0db27ad7f46c94629596d1cac5e not found: ID does not exist" containerID="2079d767c31a67dd91206f1ef52ef209de8fe0db27ad7f46c94629596d1cac5e" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.091958 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2079d767c31a67dd91206f1ef52ef209de8fe0db27ad7f46c94629596d1cac5e"} err="failed to get container status \"2079d767c31a67dd91206f1ef52ef209de8fe0db27ad7f46c94629596d1cac5e\": rpc error: 
code = NotFound desc = could not find container \"2079d767c31a67dd91206f1ef52ef209de8fe0db27ad7f46c94629596d1cac5e\": container with ID starting with 2079d767c31a67dd91206f1ef52ef209de8fe0db27ad7f46c94629596d1cac5e not found: ID does not exist" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.104496 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.131145 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:24:55 crc kubenswrapper[4794]: E0216 17:24:55.131836 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef6f08e-df36-49a2-b05e-b62545488b2d" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.131859 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef6f08e-df36-49a2-b05e-b62545488b2d" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.132106 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ef6f08e-df36-49a2-b05e-b62545488b2d" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.132931 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.139782 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.139844 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.140044 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.155533 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.310841 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhslm\" (UniqueName: \"kubernetes.io/projected/2952b970-259a-4f23-b3bc-614d5e88a6d1-kube-api-access-bhslm\") pod \"nova-cell1-novncproxy-0\" (UID: \"2952b970-259a-4f23-b3bc-614d5e88a6d1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.311032 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2952b970-259a-4f23-b3bc-614d5e88a6d1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2952b970-259a-4f23-b3bc-614d5e88a6d1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.311201 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2952b970-259a-4f23-b3bc-614d5e88a6d1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2952b970-259a-4f23-b3bc-614d5e88a6d1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 
16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.311295 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2952b970-259a-4f23-b3bc-614d5e88a6d1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2952b970-259a-4f23-b3bc-614d5e88a6d1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.311384 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2952b970-259a-4f23-b3bc-614d5e88a6d1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2952b970-259a-4f23-b3bc-614d5e88a6d1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.403352 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.403817 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.411829 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.413118 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhslm\" (UniqueName: \"kubernetes.io/projected/2952b970-259a-4f23-b3bc-614d5e88a6d1-kube-api-access-bhslm\") pod \"nova-cell1-novncproxy-0\" (UID: \"2952b970-259a-4f23-b3bc-614d5e88a6d1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.413267 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2952b970-259a-4f23-b3bc-614d5e88a6d1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"2952b970-259a-4f23-b3bc-614d5e88a6d1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.413341 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2952b970-259a-4f23-b3bc-614d5e88a6d1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2952b970-259a-4f23-b3bc-614d5e88a6d1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.413368 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2952b970-259a-4f23-b3bc-614d5e88a6d1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2952b970-259a-4f23-b3bc-614d5e88a6d1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.413400 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2952b970-259a-4f23-b3bc-614d5e88a6d1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2952b970-259a-4f23-b3bc-614d5e88a6d1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.422327 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2952b970-259a-4f23-b3bc-614d5e88a6d1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2952b970-259a-4f23-b3bc-614d5e88a6d1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.426847 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2952b970-259a-4f23-b3bc-614d5e88a6d1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2952b970-259a-4f23-b3bc-614d5e88a6d1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 
16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.441814 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2952b970-259a-4f23-b3bc-614d5e88a6d1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2952b970-259a-4f23-b3bc-614d5e88a6d1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.442856 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhslm\" (UniqueName: \"kubernetes.io/projected/2952b970-259a-4f23-b3bc-614d5e88a6d1-kube-api-access-bhslm\") pod \"nova-cell1-novncproxy-0\" (UID: \"2952b970-259a-4f23-b3bc-614d5e88a6d1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.446111 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2952b970-259a-4f23-b3bc-614d5e88a6d1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2952b970-259a-4f23-b3bc-614d5e88a6d1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.459489 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.776617 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.776904 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" containerName="ceilometer-central-agent" containerID="cri-o://575daf2240b0c78c5ffd48e6a5e3537007479eee5178e5f5870bc389b2b21629" gracePeriod=30 Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.777901 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" containerName="proxy-httpd" containerID="cri-o://8e8c0eb292e4f32977a9154a6d67924fdfacec5b2d2b07da700d05672a74c88d" gracePeriod=30 Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.777951 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" containerName="sg-core" containerID="cri-o://465de4d6e8e79885c1f2bdb23b6f65278df6d98b8028914836cde307df056239" gracePeriod=30 Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.777928 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" containerName="ceilometer-notification-agent" containerID="cri-o://1e9bef14e6f742a06ccd945662057ab4607e9e4d5326ba1ee50f7a53e4820fcb" gracePeriod=30 Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.788519 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 16 17:24:55 crc kubenswrapper[4794]: I0216 17:24:55.939793 4794 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/aodh-0"] Feb 16 17:24:56 crc kubenswrapper[4794]: I0216 17:24:56.077912 4794 generic.go:334] "Generic (PLEG): container finished" podID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" containerID="8e8c0eb292e4f32977a9154a6d67924fdfacec5b2d2b07da700d05672a74c88d" exitCode=0 Feb 16 17:24:56 crc kubenswrapper[4794]: I0216 17:24:56.077949 4794 generic.go:334] "Generic (PLEG): container finished" podID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" containerID="465de4d6e8e79885c1f2bdb23b6f65278df6d98b8028914836cde307df056239" exitCode=2 Feb 16 17:24:56 crc kubenswrapper[4794]: I0216 17:24:56.077998 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94","Type":"ContainerDied","Data":"8e8c0eb292e4f32977a9154a6d67924fdfacec5b2d2b07da700d05672a74c88d"} Feb 16 17:24:56 crc kubenswrapper[4794]: I0216 17:24:56.078057 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94","Type":"ContainerDied","Data":"465de4d6e8e79885c1f2bdb23b6f65278df6d98b8028914836cde307df056239"} Feb 16 17:24:56 crc kubenswrapper[4794]: I0216 17:24:56.091343 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 17:24:56 crc kubenswrapper[4794]: I0216 17:24:56.592770 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 17:24:56 crc kubenswrapper[4794]: I0216 17:24:56.593757 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 17:24:56 crc kubenswrapper[4794]: I0216 17:24:56.635780 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 17:24:56 crc kubenswrapper[4794]: I0216 17:24:56.699609 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 17:24:56 crc 
kubenswrapper[4794]: I0216 17:24:56.808820 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ef6f08e-df36-49a2-b05e-b62545488b2d" path="/var/lib/kubelet/pods/7ef6f08e-df36-49a2-b05e-b62545488b2d/volumes" Feb 16 17:24:56 crc kubenswrapper[4794]: I0216 17:24:56.990030 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:24:56 crc kubenswrapper[4794]: W0216 17:24:56.993184 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2952b970_259a_4f23_b3bc_614d5e88a6d1.slice/crio-2ab914cec330de76c81c63ba08267ef092df05ad8e772c4ee75dc469eddd45e5 WatchSource:0}: Error finding container 2ab914cec330de76c81c63ba08267ef092df05ad8e772c4ee75dc469eddd45e5: Status 404 returned error can't find the container with id 2ab914cec330de76c81c63ba08267ef092df05ad8e772c4ee75dc469eddd45e5 Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.110045 4794 generic.go:334] "Generic (PLEG): container finished" podID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" containerID="575daf2240b0c78c5ffd48e6a5e3537007479eee5178e5f5870bc389b2b21629" exitCode=0 Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.110202 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94","Type":"ContainerDied","Data":"575daf2240b0c78c5ffd48e6a5e3537007479eee5178e5f5870bc389b2b21629"} Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.113883 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2952b970-259a-4f23-b3bc-614d5e88a6d1","Type":"ContainerStarted","Data":"2ab914cec330de76c81c63ba08267ef092df05ad8e772c4ee75dc469eddd45e5"} Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.118430 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"0135c16b-58fd-4898-b711-786fa961ddfe","Type":"ContainerStarted","Data":"1eb6de74e33c5395a20b2b53d19d7376cb4f1ddab30d9869af282eff0332f37e"} Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.118855 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.125528 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.318046 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-pczg4"] Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.322345 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.332568 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-pczg4"] Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.398234 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-pczg4\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") " pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.398731 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-pczg4\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") " pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.398918 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-pczg4\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") " pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.398955 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-config\") pod \"dnsmasq-dns-f84f9ccf-pczg4\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") " pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.399181 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-pczg4\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") " pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.399416 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkbjs\" (UniqueName: \"kubernetes.io/projected/c9abdf39-73a5-420f-8b9b-59831d550111-kube-api-access-fkbjs\") pod \"dnsmasq-dns-f84f9ccf-pczg4\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") " pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.501397 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-pczg4\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") " pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.501478 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-config\") pod \"dnsmasq-dns-f84f9ccf-pczg4\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") " pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.501523 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-pczg4\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") " pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.501572 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkbjs\" (UniqueName: \"kubernetes.io/projected/c9abdf39-73a5-420f-8b9b-59831d550111-kube-api-access-fkbjs\") pod \"dnsmasq-dns-f84f9ccf-pczg4\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") " pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.501634 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-pczg4\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") " pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.501685 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-pczg4\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") " pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.502875 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-pczg4\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") " pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.503034 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-pczg4\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") " pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.503489 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-pczg4\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") " pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.504434 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-config\") pod \"dnsmasq-dns-f84f9ccf-pczg4\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") " pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.507790 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-pczg4\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") " pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.527329 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkbjs\" (UniqueName: \"kubernetes.io/projected/c9abdf39-73a5-420f-8b9b-59831d550111-kube-api-access-fkbjs\") pod \"dnsmasq-dns-f84f9ccf-pczg4\" (UID: 
\"c9abdf39-73a5-420f-8b9b-59831d550111\") " pod="openstack/dnsmasq-dns-f84f9ccf-pczg4"
Feb 16 17:24:57 crc kubenswrapper[4794]: I0216 17:24:57.656118 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-pczg4"
Feb 16 17:24:58 crc kubenswrapper[4794]: I0216 17:24:58.133852 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2952b970-259a-4f23-b3bc-614d5e88a6d1","Type":"ContainerStarted","Data":"338d403b696961ee9e90ca970b0db22d576e3fc1baf5488f04983d8e4f0764cf"}
Feb 16 17:24:58 crc kubenswrapper[4794]: I0216 17:24:58.189889 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.189868167 podStartE2EDuration="3.189868167s" podCreationTimestamp="2026-02-16 17:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:24:58.183527178 +0000 UTC m=+1524.131621825" watchObservedRunningTime="2026-02-16 17:24:58.189868167 +0000 UTC m=+1524.137962814"
Feb 16 17:24:58 crc kubenswrapper[4794]: I0216 17:24:58.214815 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-pczg4"]
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.157999 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" event={"ID":"c9abdf39-73a5-420f-8b9b-59831d550111","Type":"ContainerStarted","Data":"53b01755854ec804139457859821b7d1de227b10bcc305c7db758e469b86352e"}
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.191562 4794 generic.go:334] "Generic (PLEG): container finished" podID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" containerID="1e9bef14e6f742a06ccd945662057ab4607e9e4d5326ba1ee50f7a53e4820fcb" exitCode=0
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.191645 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94","Type":"ContainerDied","Data":"1e9bef14e6f742a06ccd945662057ab4607e9e4d5326ba1ee50f7a53e4820fcb"}
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.385843 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.456242 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-sg-core-conf-yaml\") pod \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") "
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.456317 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-run-httpd\") pod \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") "
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.456356 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbgk7\" (UniqueName: \"kubernetes.io/projected/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-kube-api-access-cbgk7\") pod \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") "
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.456394 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-config-data\") pod \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") "
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.456415 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-scripts\") pod \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") "
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.456470 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-combined-ca-bundle\") pod \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") "
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.456489 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-log-httpd\") pod \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\" (UID: \"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94\") "
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.457250 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" (UID: "c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.459152 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" (UID: "c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.468104 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-scripts" (OuterVolumeSpecName: "scripts") pod "c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" (UID: "c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.471508 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-kube-api-access-cbgk7" (OuterVolumeSpecName: "kube-api-access-cbgk7") pod "c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" (UID: "c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94"). InnerVolumeSpecName "kube-api-access-cbgk7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.506960 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" (UID: "c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.570054 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.570106 4794 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.570117 4794 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.570272 4794 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.570287 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbgk7\" (UniqueName: \"kubernetes.io/projected/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-kube-api-access-cbgk7\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.650209 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" (UID: "c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.687489 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:24:59 crc kubenswrapper[4794]: E0216 17:24:59.703152 4794 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9abdf39_73a5_420f_8b9b_59831d550111.slice/crio-conmon-2642727f1e737a0fd54e22ac129c67e5e32a0a08c556a8175d72e5def5391707.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9abdf39_73a5_420f_8b9b_59831d550111.slice/crio-2642727f1e737a0fd54e22ac129c67e5e32a0a08c556a8175d72e5def5391707.scope\": RecentStats: unable to find data in memory cache]"
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.713682 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.780619 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-config-data" (OuterVolumeSpecName: "config-data") pod "c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" (UID: "c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:24:59 crc kubenswrapper[4794]: I0216 17:24:59.789155 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.209073 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0135c16b-58fd-4898-b711-786fa961ddfe","Type":"ContainerStarted","Data":"300c850cc4b8542798e3490418388e0f2a551a053f3d01edd7497101244a28c2"}
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.214187 4794 generic.go:334] "Generic (PLEG): container finished" podID="c9abdf39-73a5-420f-8b9b-59831d550111" containerID="2642727f1e737a0fd54e22ac129c67e5e32a0a08c556a8175d72e5def5391707" exitCode=0
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.214261 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" event={"ID":"c9abdf39-73a5-420f-8b9b-59831d550111","Type":"ContainerDied","Data":"2642727f1e737a0fd54e22ac129c67e5e32a0a08c556a8175d72e5def5391707"}
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.220675 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="81beeae8-f0b5-480c-92cf-ce047e2a55a5" containerName="nova-api-log" containerID="cri-o://ffbc9497b5c9a21de01f28e99dcd0b65281361a347588ba5834a9ca830e0fbd6" gracePeriod=30
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.220801 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.221135 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94","Type":"ContainerDied","Data":"b6159682fde96e07e3a0e2f92425bbf3f217b837bf7a1055381dcb937c2595d4"}
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.221196 4794 scope.go:117] "RemoveContainer" containerID="8e8c0eb292e4f32977a9154a6d67924fdfacec5b2d2b07da700d05672a74c88d"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.221366 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="81beeae8-f0b5-480c-92cf-ce047e2a55a5" containerName="nova-api-api" containerID="cri-o://a12f9787d1b709ac0cfbe17d3850951219242d69eb011c85c8a96eae34aea977" gracePeriod=30
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.462406 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.468316 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.486178 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.497868 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:25:00 crc kubenswrapper[4794]: E0216 17:25:00.498415 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" containerName="sg-core"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.498431 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" containerName="sg-core"
Feb 16 17:25:00 crc kubenswrapper[4794]: E0216 17:25:00.498466 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" containerName="ceilometer-central-agent"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.498473 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" containerName="ceilometer-central-agent"
Feb 16 17:25:00 crc kubenswrapper[4794]: E0216 17:25:00.498497 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" containerName="ceilometer-notification-agent"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.498504 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" containerName="ceilometer-notification-agent"
Feb 16 17:25:00 crc kubenswrapper[4794]: E0216 17:25:00.498522 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" containerName="proxy-httpd"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.498529 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" containerName="proxy-httpd"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.498749 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" containerName="proxy-httpd"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.498772 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" containerName="ceilometer-notification-agent"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.498785 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" containerName="sg-core"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.498802 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" containerName="ceilometer-central-agent"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.501002 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.505778 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.505836 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.512760 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.606252 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m4n2\" (UniqueName: \"kubernetes.io/projected/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-kube-api-access-6m4n2\") pod \"ceilometer-0\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.606287 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-run-httpd\") pod \"ceilometer-0\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.606319 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-log-httpd\") pod \"ceilometer-0\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.606354 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.609585 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.611758 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-config-data\") pod \"ceilometer-0\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.611822 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-scripts\") pod \"ceilometer-0\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.714026 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-config-data\") pod \"ceilometer-0\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.714096 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-scripts\") pod \"ceilometer-0\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.714227 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-run-httpd\") pod \"ceilometer-0\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.714252 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m4n2\" (UniqueName: \"kubernetes.io/projected/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-kube-api-access-6m4n2\") pod \"ceilometer-0\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.714275 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-log-httpd\") pod \"ceilometer-0\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.714342 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.714808 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-log-httpd\") pod \"ceilometer-0\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.714875 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-run-httpd\") pod \"ceilometer-0\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.714988 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.718376 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.719711 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-scripts\") pod \"ceilometer-0\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.719935 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-config-data\") pod \"ceilometer-0\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.720762 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.744008 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m4n2\" (UniqueName: \"kubernetes.io/projected/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-kube-api-access-6m4n2\") pod \"ceilometer-0\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.806591 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94" path="/var/lib/kubelet/pods/c7b8e8f9-a2c7-4ce7-8a5e-24300eb8ce94/volumes"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.822345 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:25:00 crc kubenswrapper[4794]: I0216 17:25:00.937444 4794 scope.go:117] "RemoveContainer" containerID="465de4d6e8e79885c1f2bdb23b6f65278df6d98b8028914836cde307df056239"
Feb 16 17:25:01 crc kubenswrapper[4794]: I0216 17:25:01.018839 4794 scope.go:117] "RemoveContainer" containerID="1e9bef14e6f742a06ccd945662057ab4607e9e4d5326ba1ee50f7a53e4820fcb"
Feb 16 17:25:01 crc kubenswrapper[4794]: I0216 17:25:01.207679 4794 scope.go:117] "RemoveContainer" containerID="575daf2240b0c78c5ffd48e6a5e3537007479eee5178e5f5870bc389b2b21629"
Feb 16 17:25:01 crc kubenswrapper[4794]: I0216 17:25:01.305662 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" event={"ID":"c9abdf39-73a5-420f-8b9b-59831d550111","Type":"ContainerStarted","Data":"81cb2e84aea8e7f2b2910cce4d5631320a40bf09b87d1dd76fe3d11d640478ad"}
Feb 16 17:25:01 crc kubenswrapper[4794]: I0216 17:25:01.306724 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-pczg4"
Feb 16 17:25:01 crc kubenswrapper[4794]: I0216 17:25:01.314391 4794 generic.go:334] "Generic (PLEG): container finished" podID="81beeae8-f0b5-480c-92cf-ce047e2a55a5" containerID="ffbc9497b5c9a21de01f28e99dcd0b65281361a347588ba5834a9ca830e0fbd6" exitCode=143
Feb 16 17:25:01 crc kubenswrapper[4794]: I0216 17:25:01.314439 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"81beeae8-f0b5-480c-92cf-ce047e2a55a5","Type":"ContainerDied","Data":"ffbc9497b5c9a21de01f28e99dcd0b65281361a347588ba5834a9ca830e0fbd6"}
Feb 16 17:25:01 crc kubenswrapper[4794]: I0216 17:25:01.327125 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" podStartSLOduration=4.327079967 podStartE2EDuration="4.327079967s" podCreationTimestamp="2026-02-16 17:24:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:25:01.322929669 +0000 UTC m=+1527.271024316" watchObservedRunningTime="2026-02-16 17:25:01.327079967 +0000 UTC m=+1527.275174614"
Feb 16 17:25:01 crc kubenswrapper[4794]: I0216 17:25:01.556900 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:25:01 crc kubenswrapper[4794]: I0216 17:25:01.792964 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691"
Feb 16 17:25:01 crc kubenswrapper[4794]: E0216 17:25:01.793208 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:25:02 crc kubenswrapper[4794]: I0216 17:25:02.325869 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0135c16b-58fd-4898-b711-786fa961ddfe","Type":"ContainerStarted","Data":"c97b08cfda5252db5e86bb5a56ebf72045e47ee226aaebc0a0eef401aaed9c8e"}
Feb 16 17:25:02 crc kubenswrapper[4794]: I0216 17:25:02.326046 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="0135c16b-58fd-4898-b711-786fa961ddfe" containerName="aodh-api" containerID="cri-o://4e7db69f5536609f27e83c67261c26ed7d0d609ce0198a15105744257717f0ce" gracePeriod=30
Feb 16 17:25:02 crc kubenswrapper[4794]: I0216 17:25:02.326083 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="0135c16b-58fd-4898-b711-786fa961ddfe" containerName="aodh-listener" containerID="cri-o://c97b08cfda5252db5e86bb5a56ebf72045e47ee226aaebc0a0eef401aaed9c8e" gracePeriod=30
Feb 16 17:25:02 crc kubenswrapper[4794]: I0216 17:25:02.326358 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="0135c16b-58fd-4898-b711-786fa961ddfe" containerName="aodh-notifier" containerID="cri-o://300c850cc4b8542798e3490418388e0f2a551a053f3d01edd7497101244a28c2" gracePeriod=30
Feb 16 17:25:02 crc kubenswrapper[4794]: I0216 17:25:02.326285 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="0135c16b-58fd-4898-b711-786fa961ddfe" containerName="aodh-evaluator" containerID="cri-o://1eb6de74e33c5395a20b2b53d19d7376cb4f1ddab30d9869af282eff0332f37e" gracePeriod=30
Feb 16 17:25:02 crc kubenswrapper[4794]: I0216 17:25:02.331209 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c570e74-9f5d-4d0b-b925-6718adb1fbd9","Type":"ContainerStarted","Data":"723b674c33e8716d9c08a36c687a3be22763947ada990b147bf49041d9bb692f"}
Feb 16 17:25:02 crc kubenswrapper[4794]: I0216 17:25:02.331263 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c570e74-9f5d-4d0b-b925-6718adb1fbd9","Type":"ContainerStarted","Data":"d5359ee5796981c325b5cf174da236e438938f5e238cd7d52880d6c71b2744f2"}
Feb 16 17:25:02 crc kubenswrapper[4794]: I0216 17:25:02.348358 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.8159109730000003 podStartE2EDuration="10.348342304s" podCreationTimestamp="2026-02-16 17:24:52 +0000 UTC" firstStartedPulling="2026-02-16 17:24:53.487195687 +0000 UTC m=+1519.435290334" lastFinishedPulling="2026-02-16 17:25:01.019627008 +0000 UTC m=+1526.967721665" observedRunningTime="2026-02-16 17:25:02.346236805 +0000 UTC m=+1528.294331452" watchObservedRunningTime="2026-02-16 17:25:02.348342304 +0000 UTC m=+1528.296436951"
Feb 16 17:25:03 crc kubenswrapper[4794]: I0216 17:25:03.204943 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:25:03 crc kubenswrapper[4794]: I0216 17:25:03.359851 4794 generic.go:334] "Generic (PLEG): container finished" podID="0135c16b-58fd-4898-b711-786fa961ddfe" containerID="1eb6de74e33c5395a20b2b53d19d7376cb4f1ddab30d9869af282eff0332f37e" exitCode=0
Feb 16 17:25:03 crc kubenswrapper[4794]: I0216 17:25:03.359890 4794 generic.go:334] "Generic (PLEG): container finished" podID="0135c16b-58fd-4898-b711-786fa961ddfe" containerID="4e7db69f5536609f27e83c67261c26ed7d0d609ce0198a15105744257717f0ce" exitCode=0
Feb 16 17:25:03 crc kubenswrapper[4794]: I0216 17:25:03.359979 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0135c16b-58fd-4898-b711-786fa961ddfe","Type":"ContainerDied","Data":"1eb6de74e33c5395a20b2b53d19d7376cb4f1ddab30d9869af282eff0332f37e"}
Feb 16 17:25:03 crc kubenswrapper[4794]: I0216 17:25:03.360006 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0135c16b-58fd-4898-b711-786fa961ddfe","Type":"ContainerDied","Data":"4e7db69f5536609f27e83c67261c26ed7d0d609ce0198a15105744257717f0ce"}
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.232031 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.356603 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7j5w\" (UniqueName: \"kubernetes.io/projected/81beeae8-f0b5-480c-92cf-ce047e2a55a5-kube-api-access-p7j5w\") pod \"81beeae8-f0b5-480c-92cf-ce047e2a55a5\" (UID: \"81beeae8-f0b5-480c-92cf-ce047e2a55a5\") "
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.357035 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81beeae8-f0b5-480c-92cf-ce047e2a55a5-combined-ca-bundle\") pod \"81beeae8-f0b5-480c-92cf-ce047e2a55a5\" (UID: \"81beeae8-f0b5-480c-92cf-ce047e2a55a5\") "
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.357451 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81beeae8-f0b5-480c-92cf-ce047e2a55a5-logs\") pod \"81beeae8-f0b5-480c-92cf-ce047e2a55a5\" (UID: \"81beeae8-f0b5-480c-92cf-ce047e2a55a5\") "
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.357481 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81beeae8-f0b5-480c-92cf-ce047e2a55a5-config-data\") pod \"81beeae8-f0b5-480c-92cf-ce047e2a55a5\" (UID: \"81beeae8-f0b5-480c-92cf-ce047e2a55a5\") "
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.358381 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81beeae8-f0b5-480c-92cf-ce047e2a55a5-logs" (OuterVolumeSpecName: "logs") pod "81beeae8-f0b5-480c-92cf-ce047e2a55a5" (UID: "81beeae8-f0b5-480c-92cf-ce047e2a55a5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.372107 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81beeae8-f0b5-480c-92cf-ce047e2a55a5-kube-api-access-p7j5w" (OuterVolumeSpecName: "kube-api-access-p7j5w") pod "81beeae8-f0b5-480c-92cf-ce047e2a55a5" (UID: "81beeae8-f0b5-480c-92cf-ce047e2a55a5"). InnerVolumeSpecName "kube-api-access-p7j5w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.387330 4794 generic.go:334] "Generic (PLEG): container finished" podID="0135c16b-58fd-4898-b711-786fa961ddfe" containerID="300c850cc4b8542798e3490418388e0f2a551a053f3d01edd7497101244a28c2" exitCode=0
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.387403 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0135c16b-58fd-4898-b711-786fa961ddfe","Type":"ContainerDied","Data":"300c850cc4b8542798e3490418388e0f2a551a053f3d01edd7497101244a28c2"}
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.389584 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c570e74-9f5d-4d0b-b925-6718adb1fbd9","Type":"ContainerStarted","Data":"52050fe6bd55a4fedb657405ec95ead697376bd2d895064719eab30163e92b81"}
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.389613 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c570e74-9f5d-4d0b-b925-6718adb1fbd9","Type":"ContainerStarted","Data":"5fcd4f2fd8fe03f0366626635c5f91d9ae0840eb4a8ab050dc1f461126cf565e"}
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.391004 4794 generic.go:334] "Generic (PLEG): container finished" podID="81beeae8-f0b5-480c-92cf-ce047e2a55a5" containerID="a12f9787d1b709ac0cfbe17d3850951219242d69eb011c85c8a96eae34aea977" exitCode=0
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.391028 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"81beeae8-f0b5-480c-92cf-ce047e2a55a5","Type":"ContainerDied","Data":"a12f9787d1b709ac0cfbe17d3850951219242d69eb011c85c8a96eae34aea977"}
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.391049 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"81beeae8-f0b5-480c-92cf-ce047e2a55a5","Type":"ContainerDied","Data":"9c04b4268b2112a7ea6571bd3ce512e706c9dc4ced78d1664191e54ab33413f4"}
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.391066 4794 scope.go:117] "RemoveContainer" containerID="a12f9787d1b709ac0cfbe17d3850951219242d69eb011c85c8a96eae34aea977"
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.391078 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.409126 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81beeae8-f0b5-480c-92cf-ce047e2a55a5-config-data" (OuterVolumeSpecName: "config-data") pod "81beeae8-f0b5-480c-92cf-ce047e2a55a5" (UID: "81beeae8-f0b5-480c-92cf-ce047e2a55a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.412592 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81beeae8-f0b5-480c-92cf-ce047e2a55a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81beeae8-f0b5-480c-92cf-ce047e2a55a5" (UID: "81beeae8-f0b5-480c-92cf-ce047e2a55a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.462446 4794 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81beeae8-f0b5-480c-92cf-ce047e2a55a5-logs\") on node \"crc\" DevicePath \"\""
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.462485 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81beeae8-f0b5-480c-92cf-ce047e2a55a5-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.462498 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7j5w\" (UniqueName: \"kubernetes.io/projected/81beeae8-f0b5-480c-92cf-ce047e2a55a5-kube-api-access-p7j5w\") on node \"crc\" DevicePath \"\""
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.462511 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81beeae8-f0b5-480c-92cf-ce047e2a55a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.526932 4794 scope.go:117] "RemoveContainer" containerID="ffbc9497b5c9a21de01f28e99dcd0b65281361a347588ba5834a9ca830e0fbd6"
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.552026 4794 scope.go:117] "RemoveContainer" containerID="a12f9787d1b709ac0cfbe17d3850951219242d69eb011c85c8a96eae34aea977"
Feb 16 17:25:04 crc kubenswrapper[4794]: E0216 17:25:04.555459 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a12f9787d1b709ac0cfbe17d3850951219242d69eb011c85c8a96eae34aea977\": container with ID starting with a12f9787d1b709ac0cfbe17d3850951219242d69eb011c85c8a96eae34aea977 not found: ID does not exist" containerID="a12f9787d1b709ac0cfbe17d3850951219242d69eb011c85c8a96eae34aea977"
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.555508 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a12f9787d1b709ac0cfbe17d3850951219242d69eb011c85c8a96eae34aea977"} err="failed to get container status \"a12f9787d1b709ac0cfbe17d3850951219242d69eb011c85c8a96eae34aea977\": rpc error: code = NotFound desc = could not find container \"a12f9787d1b709ac0cfbe17d3850951219242d69eb011c85c8a96eae34aea977\": container with ID starting with a12f9787d1b709ac0cfbe17d3850951219242d69eb011c85c8a96eae34aea977 not found: ID does not exist"
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.555536 4794 scope.go:117] "RemoveContainer" containerID="ffbc9497b5c9a21de01f28e99dcd0b65281361a347588ba5834a9ca830e0fbd6"
Feb 16 17:25:04 crc kubenswrapper[4794]: E0216 17:25:04.563416 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffbc9497b5c9a21de01f28e99dcd0b65281361a347588ba5834a9ca830e0fbd6\": container with ID starting with ffbc9497b5c9a21de01f28e99dcd0b65281361a347588ba5834a9ca830e0fbd6 not found: ID does not exist" containerID="ffbc9497b5c9a21de01f28e99dcd0b65281361a347588ba5834a9ca830e0fbd6"
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.563457 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffbc9497b5c9a21de01f28e99dcd0b65281361a347588ba5834a9ca830e0fbd6"} err="failed to get container status \"ffbc9497b5c9a21de01f28e99dcd0b65281361a347588ba5834a9ca830e0fbd6\": rpc error: code = NotFound desc = could not find container \"ffbc9497b5c9a21de01f28e99dcd0b65281361a347588ba5834a9ca830e0fbd6\": container with ID starting with ffbc9497b5c9a21de01f28e99dcd0b65281361a347588ba5834a9ca830e0fbd6 not found: ID does not exist"
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.735653 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.749027 4794
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.761454 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 17:25:04 crc kubenswrapper[4794]: E0216 17:25:04.762156 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81beeae8-f0b5-480c-92cf-ce047e2a55a5" containerName="nova-api-api" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.762226 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="81beeae8-f0b5-480c-92cf-ce047e2a55a5" containerName="nova-api-api" Feb 16 17:25:04 crc kubenswrapper[4794]: E0216 17:25:04.762322 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81beeae8-f0b5-480c-92cf-ce047e2a55a5" containerName="nova-api-log" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.762410 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="81beeae8-f0b5-480c-92cf-ce047e2a55a5" containerName="nova-api-log" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.762761 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="81beeae8-f0b5-480c-92cf-ce047e2a55a5" containerName="nova-api-api" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.762831 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="81beeae8-f0b5-480c-92cf-ce047e2a55a5" containerName="nova-api-log" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.764244 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.812168 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.812210 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.813001 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-public-tls-certs\") pod \"nova-api-0\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " pod="openstack/nova-api-0" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.813047 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " pod="openstack/nova-api-0" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.813130 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-config-data\") pod \"nova-api-0\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " pod="openstack/nova-api-0" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.813152 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8855b32-0f8f-4cc3-af68-abdb6219b49e-logs\") pod \"nova-api-0\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " pod="openstack/nova-api-0" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.813286 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " pod="openstack/nova-api-0" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.813466 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kctqq\" (UniqueName: \"kubernetes.io/projected/e8855b32-0f8f-4cc3-af68-abdb6219b49e-kube-api-access-kctqq\") pod \"nova-api-0\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " pod="openstack/nova-api-0" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.813845 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.892375 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81beeae8-f0b5-480c-92cf-ce047e2a55a5" path="/var/lib/kubelet/pods/81beeae8-f0b5-480c-92cf-ce047e2a55a5/volumes" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.893060 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.916355 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " pod="openstack/nova-api-0" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.916461 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kctqq\" (UniqueName: \"kubernetes.io/projected/e8855b32-0f8f-4cc3-af68-abdb6219b49e-kube-api-access-kctqq\") pod \"nova-api-0\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " pod="openstack/nova-api-0" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.916580 4794 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-public-tls-certs\") pod \"nova-api-0\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " pod="openstack/nova-api-0" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.916604 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " pod="openstack/nova-api-0" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.916631 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8855b32-0f8f-4cc3-af68-abdb6219b49e-logs\") pod \"nova-api-0\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " pod="openstack/nova-api-0" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.916646 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-config-data\") pod \"nova-api-0\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " pod="openstack/nova-api-0" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.918152 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8855b32-0f8f-4cc3-af68-abdb6219b49e-logs\") pod \"nova-api-0\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " pod="openstack/nova-api-0" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.921346 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " pod="openstack/nova-api-0" Feb 16 17:25:04 crc 
kubenswrapper[4794]: I0216 17:25:04.921506 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-config-data\") pod \"nova-api-0\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " pod="openstack/nova-api-0" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.922366 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-public-tls-certs\") pod \"nova-api-0\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " pod="openstack/nova-api-0" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.924863 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " pod="openstack/nova-api-0" Feb 16 17:25:04 crc kubenswrapper[4794]: I0216 17:25:04.941498 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kctqq\" (UniqueName: \"kubernetes.io/projected/e8855b32-0f8f-4cc3-af68-abdb6219b49e-kube-api-access-kctqq\") pod \"nova-api-0\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " pod="openstack/nova-api-0" Feb 16 17:25:05 crc kubenswrapper[4794]: I0216 17:25:05.144913 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:25:05 crc kubenswrapper[4794]: I0216 17:25:05.462229 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:25:05 crc kubenswrapper[4794]: I0216 17:25:05.498241 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:25:05 crc kubenswrapper[4794]: I0216 17:25:05.874788 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:25:05 crc kubenswrapper[4794]: W0216 17:25:05.875079 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8855b32_0f8f_4cc3_af68_abdb6219b49e.slice/crio-21a8fcfb36538e18c14b964b7c7d3314ec33b2ba82b39e703f5c8259cf1a3e96 WatchSource:0}: Error finding container 21a8fcfb36538e18c14b964b7c7d3314ec33b2ba82b39e703f5c8259cf1a3e96: Status 404 returned error can't find the container with id 21a8fcfb36538e18c14b964b7c7d3314ec33b2ba82b39e703f5c8259cf1a3e96 Feb 16 17:25:06 crc kubenswrapper[4794]: I0216 17:25:06.437297 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8855b32-0f8f-4cc3-af68-abdb6219b49e","Type":"ContainerStarted","Data":"b8ce5687165cd769be4679f4ee59a3a537ff546cdd9c4a5a2ac3673747c44991"} Feb 16 17:25:06 crc kubenswrapper[4794]: I0216 17:25:06.437626 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8855b32-0f8f-4cc3-af68-abdb6219b49e","Type":"ContainerStarted","Data":"21a8fcfb36538e18c14b964b7c7d3314ec33b2ba82b39e703f5c8259cf1a3e96"} Feb 16 17:25:06 crc kubenswrapper[4794]: I0216 17:25:06.455744 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:25:06 crc kubenswrapper[4794]: I0216 17:25:06.724365 4794 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-cell-mapping-4r7xb"] Feb 16 17:25:06 crc kubenswrapper[4794]: I0216 17:25:06.725931 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4r7xb" Feb 16 17:25:06 crc kubenswrapper[4794]: I0216 17:25:06.728643 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 16 17:25:06 crc kubenswrapper[4794]: I0216 17:25:06.728883 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 16 17:25:06 crc kubenswrapper[4794]: I0216 17:25:06.741084 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4r7xb"] Feb 16 17:25:06 crc kubenswrapper[4794]: I0216 17:25:06.769120 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80d48a50-835e-455f-81f7-9c40a212b9e6-scripts\") pod \"nova-cell1-cell-mapping-4r7xb\" (UID: \"80d48a50-835e-455f-81f7-9c40a212b9e6\") " pod="openstack/nova-cell1-cell-mapping-4r7xb" Feb 16 17:25:06 crc kubenswrapper[4794]: I0216 17:25:06.769226 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80d48a50-835e-455f-81f7-9c40a212b9e6-config-data\") pod \"nova-cell1-cell-mapping-4r7xb\" (UID: \"80d48a50-835e-455f-81f7-9c40a212b9e6\") " pod="openstack/nova-cell1-cell-mapping-4r7xb" Feb 16 17:25:06 crc kubenswrapper[4794]: I0216 17:25:06.769529 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80d48a50-835e-455f-81f7-9c40a212b9e6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4r7xb\" (UID: \"80d48a50-835e-455f-81f7-9c40a212b9e6\") " pod="openstack/nova-cell1-cell-mapping-4r7xb" Feb 16 17:25:06 crc kubenswrapper[4794]: I0216 
17:25:06.769696 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zqsk\" (UniqueName: \"kubernetes.io/projected/80d48a50-835e-455f-81f7-9c40a212b9e6-kube-api-access-8zqsk\") pod \"nova-cell1-cell-mapping-4r7xb\" (UID: \"80d48a50-835e-455f-81f7-9c40a212b9e6\") " pod="openstack/nova-cell1-cell-mapping-4r7xb" Feb 16 17:25:06 crc kubenswrapper[4794]: I0216 17:25:06.872187 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80d48a50-835e-455f-81f7-9c40a212b9e6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4r7xb\" (UID: \"80d48a50-835e-455f-81f7-9c40a212b9e6\") " pod="openstack/nova-cell1-cell-mapping-4r7xb" Feb 16 17:25:06 crc kubenswrapper[4794]: I0216 17:25:06.872319 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zqsk\" (UniqueName: \"kubernetes.io/projected/80d48a50-835e-455f-81f7-9c40a212b9e6-kube-api-access-8zqsk\") pod \"nova-cell1-cell-mapping-4r7xb\" (UID: \"80d48a50-835e-455f-81f7-9c40a212b9e6\") " pod="openstack/nova-cell1-cell-mapping-4r7xb" Feb 16 17:25:06 crc kubenswrapper[4794]: I0216 17:25:06.872543 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80d48a50-835e-455f-81f7-9c40a212b9e6-scripts\") pod \"nova-cell1-cell-mapping-4r7xb\" (UID: \"80d48a50-835e-455f-81f7-9c40a212b9e6\") " pod="openstack/nova-cell1-cell-mapping-4r7xb" Feb 16 17:25:06 crc kubenswrapper[4794]: I0216 17:25:06.872603 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80d48a50-835e-455f-81f7-9c40a212b9e6-config-data\") pod \"nova-cell1-cell-mapping-4r7xb\" (UID: \"80d48a50-835e-455f-81f7-9c40a212b9e6\") " pod="openstack/nova-cell1-cell-mapping-4r7xb" Feb 16 17:25:06 crc kubenswrapper[4794]: I0216 17:25:06.878152 
4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80d48a50-835e-455f-81f7-9c40a212b9e6-config-data\") pod \"nova-cell1-cell-mapping-4r7xb\" (UID: \"80d48a50-835e-455f-81f7-9c40a212b9e6\") " pod="openstack/nova-cell1-cell-mapping-4r7xb" Feb 16 17:25:06 crc kubenswrapper[4794]: I0216 17:25:06.878159 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80d48a50-835e-455f-81f7-9c40a212b9e6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4r7xb\" (UID: \"80d48a50-835e-455f-81f7-9c40a212b9e6\") " pod="openstack/nova-cell1-cell-mapping-4r7xb" Feb 16 17:25:06 crc kubenswrapper[4794]: I0216 17:25:06.884135 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80d48a50-835e-455f-81f7-9c40a212b9e6-scripts\") pod \"nova-cell1-cell-mapping-4r7xb\" (UID: \"80d48a50-835e-455f-81f7-9c40a212b9e6\") " pod="openstack/nova-cell1-cell-mapping-4r7xb" Feb 16 17:25:06 crc kubenswrapper[4794]: I0216 17:25:06.894879 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zqsk\" (UniqueName: \"kubernetes.io/projected/80d48a50-835e-455f-81f7-9c40a212b9e6-kube-api-access-8zqsk\") pod \"nova-cell1-cell-mapping-4r7xb\" (UID: \"80d48a50-835e-455f-81f7-9c40a212b9e6\") " pod="openstack/nova-cell1-cell-mapping-4r7xb" Feb 16 17:25:07 crc kubenswrapper[4794]: I0216 17:25:07.056130 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4r7xb" Feb 16 17:25:07 crc kubenswrapper[4794]: I0216 17:25:07.451503 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c570e74-9f5d-4d0b-b925-6718adb1fbd9","Type":"ContainerStarted","Data":"ad7359ff9d7a87d42243071261b2d7eda8640e98b8914dc6b2ab58ec499f3727"} Feb 16 17:25:07 crc kubenswrapper[4794]: I0216 17:25:07.451693 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c570e74-9f5d-4d0b-b925-6718adb1fbd9" containerName="ceilometer-central-agent" containerID="cri-o://723b674c33e8716d9c08a36c687a3be22763947ada990b147bf49041d9bb692f" gracePeriod=30 Feb 16 17:25:07 crc kubenswrapper[4794]: I0216 17:25:07.451762 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c570e74-9f5d-4d0b-b925-6718adb1fbd9" containerName="sg-core" containerID="cri-o://5fcd4f2fd8fe03f0366626635c5f91d9ae0840eb4a8ab050dc1f461126cf565e" gracePeriod=30 Feb 16 17:25:07 crc kubenswrapper[4794]: I0216 17:25:07.451879 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 17:25:07 crc kubenswrapper[4794]: I0216 17:25:07.451815 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c570e74-9f5d-4d0b-b925-6718adb1fbd9" containerName="ceilometer-notification-agent" containerID="cri-o://52050fe6bd55a4fedb657405ec95ead697376bd2d895064719eab30163e92b81" gracePeriod=30 Feb 16 17:25:07 crc kubenswrapper[4794]: I0216 17:25:07.451788 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0c570e74-9f5d-4d0b-b925-6718adb1fbd9" containerName="proxy-httpd" containerID="cri-o://ad7359ff9d7a87d42243071261b2d7eda8640e98b8914dc6b2ab58ec499f3727" gracePeriod=30 Feb 16 17:25:07 crc kubenswrapper[4794]: I0216 17:25:07.464282 4794 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8855b32-0f8f-4cc3-af68-abdb6219b49e","Type":"ContainerStarted","Data":"ab518f5d52d454fc51a1144f310a7d00194e1a086d3d14e547239203b92eedc5"} Feb 16 17:25:07 crc kubenswrapper[4794]: I0216 17:25:07.482986 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.040012316 podStartE2EDuration="7.48297036s" podCreationTimestamp="2026-02-16 17:25:00 +0000 UTC" firstStartedPulling="2026-02-16 17:25:01.538603469 +0000 UTC m=+1527.486698116" lastFinishedPulling="2026-02-16 17:25:05.981561513 +0000 UTC m=+1531.929656160" observedRunningTime="2026-02-16 17:25:07.480470059 +0000 UTC m=+1533.428564706" watchObservedRunningTime="2026-02-16 17:25:07.48297036 +0000 UTC m=+1533.431065007" Feb 16 17:25:07 crc kubenswrapper[4794]: I0216 17:25:07.509111 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.509091789 podStartE2EDuration="3.509091789s" podCreationTimestamp="2026-02-16 17:25:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:25:07.50486777 +0000 UTC m=+1533.452962417" watchObservedRunningTime="2026-02-16 17:25:07.509091789 +0000 UTC m=+1533.457186436" Feb 16 17:25:07 crc kubenswrapper[4794]: I0216 17:25:07.586432 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4r7xb"] Feb 16 17:25:07 crc kubenswrapper[4794]: I0216 17:25:07.657708 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" Feb 16 17:25:07 crc kubenswrapper[4794]: I0216 17:25:07.731661 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-kdxjb"] Feb 16 17:25:07 crc kubenswrapper[4794]: I0216 17:25:07.731940 4794 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" podUID="3fbebaa3-8aa2-4ace-a9c9-558bc3964430" containerName="dnsmasq-dns" containerID="cri-o://a00b53ad46b822a70c9339195ca2a4b34915849555540ce220adb1a6c8f851a8" gracePeriod=10 Feb 16 17:25:07 crc kubenswrapper[4794]: I0216 17:25:07.921536 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" podUID="3fbebaa3-8aa2-4ace-a9c9-558bc3964430" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.240:5353: connect: connection refused" Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.481141 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4r7xb" event={"ID":"80d48a50-835e-455f-81f7-9c40a212b9e6","Type":"ContainerStarted","Data":"39148aaddc8efdd9d08367b65200436fc85a30a0cf6ccd872dd780e445c86ad9"} Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.481500 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4r7xb" event={"ID":"80d48a50-835e-455f-81f7-9c40a212b9e6","Type":"ContainerStarted","Data":"6379f4a0860adb32bd5c3dc91e42bc59b25610866f7a0ad610c2f6100b689493"} Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.483074 4794 generic.go:334] "Generic (PLEG): container finished" podID="3fbebaa3-8aa2-4ace-a9c9-558bc3964430" containerID="a00b53ad46b822a70c9339195ca2a4b34915849555540ce220adb1a6c8f851a8" exitCode=0 Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.483127 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" event={"ID":"3fbebaa3-8aa2-4ace-a9c9-558bc3964430","Type":"ContainerDied","Data":"a00b53ad46b822a70c9339195ca2a4b34915849555540ce220adb1a6c8f851a8"} Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.483147 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" 
event={"ID":"3fbebaa3-8aa2-4ace-a9c9-558bc3964430","Type":"ContainerDied","Data":"ff9256edb661d883a8e9fc31aeda100e031fa8dd89b1fff14b8ce121c17bac47"} Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.483160 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff9256edb661d883a8e9fc31aeda100e031fa8dd89b1fff14b8ce121c17bac47" Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.485338 4794 generic.go:334] "Generic (PLEG): container finished" podID="0c570e74-9f5d-4d0b-b925-6718adb1fbd9" containerID="ad7359ff9d7a87d42243071261b2d7eda8640e98b8914dc6b2ab58ec499f3727" exitCode=0 Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.485357 4794 generic.go:334] "Generic (PLEG): container finished" podID="0c570e74-9f5d-4d0b-b925-6718adb1fbd9" containerID="5fcd4f2fd8fe03f0366626635c5f91d9ae0840eb4a8ab050dc1f461126cf565e" exitCode=2 Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.485364 4794 generic.go:334] "Generic (PLEG): container finished" podID="0c570e74-9f5d-4d0b-b925-6718adb1fbd9" containerID="52050fe6bd55a4fedb657405ec95ead697376bd2d895064719eab30163e92b81" exitCode=0 Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.486422 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c570e74-9f5d-4d0b-b925-6718adb1fbd9","Type":"ContainerDied","Data":"ad7359ff9d7a87d42243071261b2d7eda8640e98b8914dc6b2ab58ec499f3727"} Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.486458 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c570e74-9f5d-4d0b-b925-6718adb1fbd9","Type":"ContainerDied","Data":"5fcd4f2fd8fe03f0366626635c5f91d9ae0840eb4a8ab050dc1f461126cf565e"} Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.486473 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0c570e74-9f5d-4d0b-b925-6718adb1fbd9","Type":"ContainerDied","Data":"52050fe6bd55a4fedb657405ec95ead697376bd2d895064719eab30163e92b81"} Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.508560 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-4r7xb" podStartSLOduration=2.508533647 podStartE2EDuration="2.508533647s" podCreationTimestamp="2026-02-16 17:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:25:08.500649564 +0000 UTC m=+1534.448744211" watchObservedRunningTime="2026-02-16 17:25:08.508533647 +0000 UTC m=+1534.456628294" Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.539980 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.635219 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-dns-svc\") pod \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.635444 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-ovsdbserver-nb\") pod \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.635480 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-ovsdbserver-sb\") pod \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " Feb 16 17:25:08 crc kubenswrapper[4794]: 
I0216 17:25:08.635531 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-dns-swift-storage-0\") pod \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.635626 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-config\") pod \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.635709 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5spq\" (UniqueName: \"kubernetes.io/projected/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-kube-api-access-q5spq\") pod \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\" (UID: \"3fbebaa3-8aa2-4ace-a9c9-558bc3964430\") " Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.656329 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-kube-api-access-q5spq" (OuterVolumeSpecName: "kube-api-access-q5spq") pod "3fbebaa3-8aa2-4ace-a9c9-558bc3964430" (UID: "3fbebaa3-8aa2-4ace-a9c9-558bc3964430"). InnerVolumeSpecName "kube-api-access-q5spq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.735986 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3fbebaa3-8aa2-4ace-a9c9-558bc3964430" (UID: "3fbebaa3-8aa2-4ace-a9c9-558bc3964430"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.739738 4794 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.739781 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5spq\" (UniqueName: \"kubernetes.io/projected/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-kube-api-access-q5spq\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.744345 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3fbebaa3-8aa2-4ace-a9c9-558bc3964430" (UID: "3fbebaa3-8aa2-4ace-a9c9-558bc3964430"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.771175 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3fbebaa3-8aa2-4ace-a9c9-558bc3964430" (UID: "3fbebaa3-8aa2-4ace-a9c9-558bc3964430"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.786820 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3fbebaa3-8aa2-4ace-a9c9-558bc3964430" (UID: "3fbebaa3-8aa2-4ace-a9c9-558bc3964430"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.790571 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-config" (OuterVolumeSpecName: "config") pod "3fbebaa3-8aa2-4ace-a9c9-558bc3964430" (UID: "3fbebaa3-8aa2-4ace-a9c9-558bc3964430"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.842113 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.842155 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.842164 4794 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:08 crc kubenswrapper[4794]: I0216 17:25:08.842175 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fbebaa3-8aa2-4ace-a9c9-558bc3964430-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:09 crc kubenswrapper[4794]: I0216 17:25:09.496634 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-kdxjb" Feb 16 17:25:09 crc kubenswrapper[4794]: I0216 17:25:09.530636 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-kdxjb"] Feb 16 17:25:09 crc kubenswrapper[4794]: I0216 17:25:09.545992 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-kdxjb"] Feb 16 17:25:10 crc kubenswrapper[4794]: I0216 17:25:10.546746 4794 generic.go:334] "Generic (PLEG): container finished" podID="0c570e74-9f5d-4d0b-b925-6718adb1fbd9" containerID="723b674c33e8716d9c08a36c687a3be22763947ada990b147bf49041d9bb692f" exitCode=0 Feb 16 17:25:10 crc kubenswrapper[4794]: I0216 17:25:10.547105 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c570e74-9f5d-4d0b-b925-6718adb1fbd9","Type":"ContainerDied","Data":"723b674c33e8716d9c08a36c687a3be22763947ada990b147bf49041d9bb692f"} Feb 16 17:25:10 crc kubenswrapper[4794]: I0216 17:25:10.805069 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fbebaa3-8aa2-4ace-a9c9-558bc3964430" path="/var/lib/kubelet/pods/3fbebaa3-8aa2-4ace-a9c9-558bc3964430/volumes" Feb 16 17:25:10 crc kubenswrapper[4794]: I0216 17:25:10.897717 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:25:10 crc kubenswrapper[4794]: I0216 17:25:10.987803 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m4n2\" (UniqueName: \"kubernetes.io/projected/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-kube-api-access-6m4n2\") pod \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " Feb 16 17:25:10 crc kubenswrapper[4794]: I0216 17:25:10.988492 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-log-httpd\") pod \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " Feb 16 17:25:10 crc kubenswrapper[4794]: I0216 17:25:10.988604 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-sg-core-conf-yaml\") pod \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " Feb 16 17:25:10 crc kubenswrapper[4794]: I0216 17:25:10.988730 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-run-httpd\") pod \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " Feb 16 17:25:10 crc kubenswrapper[4794]: I0216 17:25:10.988758 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-config-data\") pod \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " Feb 16 17:25:10 crc kubenswrapper[4794]: I0216 17:25:10.988832 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-combined-ca-bundle\") pod \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " Feb 16 17:25:10 crc kubenswrapper[4794]: I0216 17:25:10.988941 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-scripts\") pod \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\" (UID: \"0c570e74-9f5d-4d0b-b925-6718adb1fbd9\") " Feb 16 17:25:10 crc kubenswrapper[4794]: I0216 17:25:10.989487 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0c570e74-9f5d-4d0b-b925-6718adb1fbd9" (UID: "0c570e74-9f5d-4d0b-b925-6718adb1fbd9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:25:10 crc kubenswrapper[4794]: I0216 17:25:10.989582 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0c570e74-9f5d-4d0b-b925-6718adb1fbd9" (UID: "0c570e74-9f5d-4d0b-b925-6718adb1fbd9"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:25:10 crc kubenswrapper[4794]: I0216 17:25:10.990390 4794 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:10 crc kubenswrapper[4794]: I0216 17:25:10.990470 4794 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:10 crc kubenswrapper[4794]: I0216 17:25:10.995559 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-kube-api-access-6m4n2" (OuterVolumeSpecName: "kube-api-access-6m4n2") pod "0c570e74-9f5d-4d0b-b925-6718adb1fbd9" (UID: "0c570e74-9f5d-4d0b-b925-6718adb1fbd9"). InnerVolumeSpecName "kube-api-access-6m4n2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.011458 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-scripts" (OuterVolumeSpecName: "scripts") pod "0c570e74-9f5d-4d0b-b925-6718adb1fbd9" (UID: "0c570e74-9f5d-4d0b-b925-6718adb1fbd9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.037423 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0c570e74-9f5d-4d0b-b925-6718adb1fbd9" (UID: "0c570e74-9f5d-4d0b-b925-6718adb1fbd9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.093828 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.093857 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m4n2\" (UniqueName: \"kubernetes.io/projected/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-kube-api-access-6m4n2\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.093871 4794 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.124840 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c570e74-9f5d-4d0b-b925-6718adb1fbd9" (UID: "0c570e74-9f5d-4d0b-b925-6718adb1fbd9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.135203 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-config-data" (OuterVolumeSpecName: "config-data") pod "0c570e74-9f5d-4d0b-b925-6718adb1fbd9" (UID: "0c570e74-9f5d-4d0b-b925-6718adb1fbd9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.195839 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.195875 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c570e74-9f5d-4d0b-b925-6718adb1fbd9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.558471 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0c570e74-9f5d-4d0b-b925-6718adb1fbd9","Type":"ContainerDied","Data":"d5359ee5796981c325b5cf174da236e438938f5e238cd7d52880d6c71b2744f2"} Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.558543 4794 scope.go:117] "RemoveContainer" containerID="ad7359ff9d7a87d42243071261b2d7eda8640e98b8914dc6b2ab58ec499f3727" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.558700 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.597517 4794 scope.go:117] "RemoveContainer" containerID="5fcd4f2fd8fe03f0366626635c5f91d9ae0840eb4a8ab050dc1f461126cf565e" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.606102 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.629780 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.631091 4794 scope.go:117] "RemoveContainer" containerID="52050fe6bd55a4fedb657405ec95ead697376bd2d895064719eab30163e92b81" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.643802 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:25:11 crc kubenswrapper[4794]: E0216 17:25:11.644494 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c570e74-9f5d-4d0b-b925-6718adb1fbd9" containerName="sg-core" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.644516 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c570e74-9f5d-4d0b-b925-6718adb1fbd9" containerName="sg-core" Feb 16 17:25:11 crc kubenswrapper[4794]: E0216 17:25:11.644536 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c570e74-9f5d-4d0b-b925-6718adb1fbd9" containerName="proxy-httpd" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.644545 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c570e74-9f5d-4d0b-b925-6718adb1fbd9" containerName="proxy-httpd" Feb 16 17:25:11 crc kubenswrapper[4794]: E0216 17:25:11.644580 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c570e74-9f5d-4d0b-b925-6718adb1fbd9" containerName="ceilometer-notification-agent" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.644589 4794 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0c570e74-9f5d-4d0b-b925-6718adb1fbd9" containerName="ceilometer-notification-agent" Feb 16 17:25:11 crc kubenswrapper[4794]: E0216 17:25:11.644616 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fbebaa3-8aa2-4ace-a9c9-558bc3964430" containerName="init" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.644625 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fbebaa3-8aa2-4ace-a9c9-558bc3964430" containerName="init" Feb 16 17:25:11 crc kubenswrapper[4794]: E0216 17:25:11.644640 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fbebaa3-8aa2-4ace-a9c9-558bc3964430" containerName="dnsmasq-dns" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.644648 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fbebaa3-8aa2-4ace-a9c9-558bc3964430" containerName="dnsmasq-dns" Feb 16 17:25:11 crc kubenswrapper[4794]: E0216 17:25:11.644669 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c570e74-9f5d-4d0b-b925-6718adb1fbd9" containerName="ceilometer-central-agent" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.644677 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c570e74-9f5d-4d0b-b925-6718adb1fbd9" containerName="ceilometer-central-agent" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.644946 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fbebaa3-8aa2-4ace-a9c9-558bc3964430" containerName="dnsmasq-dns" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.644978 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c570e74-9f5d-4d0b-b925-6718adb1fbd9" containerName="sg-core" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.644998 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c570e74-9f5d-4d0b-b925-6718adb1fbd9" containerName="proxy-httpd" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.645020 4794 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0c570e74-9f5d-4d0b-b925-6718adb1fbd9" containerName="ceilometer-central-agent" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.645038 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c570e74-9f5d-4d0b-b925-6718adb1fbd9" containerName="ceilometer-notification-agent" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.647693 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.649848 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.652243 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.658798 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.665951 4794 scope.go:117] "RemoveContainer" containerID="723b674c33e8716d9c08a36c687a3be22763947ada990b147bf49041d9bb692f" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.705870 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-log-httpd\") pod \"ceilometer-0\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.706074 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g5lv\" (UniqueName: \"kubernetes.io/projected/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-kube-api-access-8g5lv\") pod \"ceilometer-0\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.706117 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-run-httpd\") pod \"ceilometer-0\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.706216 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-config-data\") pod \"ceilometer-0\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.706532 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.706757 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.706996 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-scripts\") pod \"ceilometer-0\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.809293 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-log-httpd\") pod \"ceilometer-0\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.809711 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-log-httpd\") pod \"ceilometer-0\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.809790 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g5lv\" (UniqueName: \"kubernetes.io/projected/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-kube-api-access-8g5lv\") pod \"ceilometer-0\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.809824 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-run-httpd\") pod \"ceilometer-0\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.809890 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-config-data\") pod \"ceilometer-0\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.809955 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 
17:25:11.809998 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.810072 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-scripts\") pod \"ceilometer-0\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.811397 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-run-httpd\") pod \"ceilometer-0\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.815398 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-scripts\") pod \"ceilometer-0\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.819840 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.821001 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") 
" pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.827319 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-config-data\") pod \"ceilometer-0\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.869041 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g5lv\" (UniqueName: \"kubernetes.io/projected/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-kube-api-access-8g5lv\") pod \"ceilometer-0\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " pod="openstack/ceilometer-0" Feb 16 17:25:11 crc kubenswrapper[4794]: I0216 17:25:11.998349 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:25:12 crc kubenswrapper[4794]: I0216 17:25:12.600767 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:25:12 crc kubenswrapper[4794]: I0216 17:25:12.817519 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c570e74-9f5d-4d0b-b925-6718adb1fbd9" path="/var/lib/kubelet/pods/0c570e74-9f5d-4d0b-b925-6718adb1fbd9/volumes" Feb 16 17:25:13 crc kubenswrapper[4794]: I0216 17:25:13.601873 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bfb4ae6-8e5f-4047-b00a-496e2cb97275","Type":"ContainerStarted","Data":"c667ef05c2615ab3428bdd947fa1adc22ade3d6fd2d037324420ae561f3a6b97"} Feb 16 17:25:13 crc kubenswrapper[4794]: I0216 17:25:13.602187 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bfb4ae6-8e5f-4047-b00a-496e2cb97275","Type":"ContainerStarted","Data":"45859627c7c5270d7a5eb8a2a070e4fdee20efffe9de79359f036c16d63c38c2"} Feb 16 17:25:13 crc kubenswrapper[4794]: I0216 17:25:13.792374 4794 scope.go:117] "RemoveContainer" 
containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691" Feb 16 17:25:13 crc kubenswrapper[4794]: E0216 17:25:13.792710 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:25:14 crc kubenswrapper[4794]: I0216 17:25:14.618732 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bfb4ae6-8e5f-4047-b00a-496e2cb97275","Type":"ContainerStarted","Data":"78fbbc0f24acd8ba660d34fde5061312a915118c6231539c34a4e0219d165fd0"} Feb 16 17:25:14 crc kubenswrapper[4794]: I0216 17:25:14.620945 4794 generic.go:334] "Generic (PLEG): container finished" podID="80d48a50-835e-455f-81f7-9c40a212b9e6" containerID="39148aaddc8efdd9d08367b65200436fc85a30a0cf6ccd872dd780e445c86ad9" exitCode=0 Feb 16 17:25:14 crc kubenswrapper[4794]: I0216 17:25:14.620975 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4r7xb" event={"ID":"80d48a50-835e-455f-81f7-9c40a212b9e6","Type":"ContainerDied","Data":"39148aaddc8efdd9d08367b65200436fc85a30a0cf6ccd872dd780e445c86ad9"} Feb 16 17:25:15 crc kubenswrapper[4794]: I0216 17:25:15.145588 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 17:25:15 crc kubenswrapper[4794]: I0216 17:25:15.145673 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 17:25:15 crc kubenswrapper[4794]: I0216 17:25:15.637123 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4bfb4ae6-8e5f-4047-b00a-496e2cb97275","Type":"ContainerStarted","Data":"a7e83c8823b194a5c6336b5d6816f58c0ed2e0574e6e09cfdfaf0d37fa1d9376"} Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.228468 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e8855b32-0f8f-4cc3-af68-abdb6219b49e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.1:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.228827 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e8855b32-0f8f-4cc3-af68-abdb6219b49e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.1:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.290788 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4r7xb" Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.450862 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80d48a50-835e-455f-81f7-9c40a212b9e6-scripts\") pod \"80d48a50-835e-455f-81f7-9c40a212b9e6\" (UID: \"80d48a50-835e-455f-81f7-9c40a212b9e6\") " Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.451399 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80d48a50-835e-455f-81f7-9c40a212b9e6-combined-ca-bundle\") pod \"80d48a50-835e-455f-81f7-9c40a212b9e6\" (UID: \"80d48a50-835e-455f-81f7-9c40a212b9e6\") " Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.451641 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/80d48a50-835e-455f-81f7-9c40a212b9e6-config-data\") pod \"80d48a50-835e-455f-81f7-9c40a212b9e6\" (UID: \"80d48a50-835e-455f-81f7-9c40a212b9e6\") " Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.451671 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zqsk\" (UniqueName: \"kubernetes.io/projected/80d48a50-835e-455f-81f7-9c40a212b9e6-kube-api-access-8zqsk\") pod \"80d48a50-835e-455f-81f7-9c40a212b9e6\" (UID: \"80d48a50-835e-455f-81f7-9c40a212b9e6\") " Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.458886 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80d48a50-835e-455f-81f7-9c40a212b9e6-kube-api-access-8zqsk" (OuterVolumeSpecName: "kube-api-access-8zqsk") pod "80d48a50-835e-455f-81f7-9c40a212b9e6" (UID: "80d48a50-835e-455f-81f7-9c40a212b9e6"). InnerVolumeSpecName "kube-api-access-8zqsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.462444 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80d48a50-835e-455f-81f7-9c40a212b9e6-scripts" (OuterVolumeSpecName: "scripts") pod "80d48a50-835e-455f-81f7-9c40a212b9e6" (UID: "80d48a50-835e-455f-81f7-9c40a212b9e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.490735 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80d48a50-835e-455f-81f7-9c40a212b9e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80d48a50-835e-455f-81f7-9c40a212b9e6" (UID: "80d48a50-835e-455f-81f7-9c40a212b9e6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.499170 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80d48a50-835e-455f-81f7-9c40a212b9e6-config-data" (OuterVolumeSpecName: "config-data") pod "80d48a50-835e-455f-81f7-9c40a212b9e6" (UID: "80d48a50-835e-455f-81f7-9c40a212b9e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.554991 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80d48a50-835e-455f-81f7-9c40a212b9e6-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.555029 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80d48a50-835e-455f-81f7-9c40a212b9e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.555042 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80d48a50-835e-455f-81f7-9c40a212b9e6-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.555052 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zqsk\" (UniqueName: \"kubernetes.io/projected/80d48a50-835e-455f-81f7-9c40a212b9e6-kube-api-access-8zqsk\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.650410 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4r7xb" Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.650409 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4r7xb" event={"ID":"80d48a50-835e-455f-81f7-9c40a212b9e6","Type":"ContainerDied","Data":"6379f4a0860adb32bd5c3dc91e42bc59b25610866f7a0ad610c2f6100b689493"} Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.650473 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6379f4a0860adb32bd5c3dc91e42bc59b25610866f7a0ad610c2f6100b689493" Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.657108 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bfb4ae6-8e5f-4047-b00a-496e2cb97275","Type":"ContainerStarted","Data":"58db2af9a34cbbf55cf99c07c1c587acda540c331431ba55038aec58f776599e"} Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.657318 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.686580 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.422374389 podStartE2EDuration="5.686552445s" podCreationTimestamp="2026-02-16 17:25:11 +0000 UTC" firstStartedPulling="2026-02-16 17:25:12.604094752 +0000 UTC m=+1538.552189409" lastFinishedPulling="2026-02-16 17:25:15.868272808 +0000 UTC m=+1541.816367465" observedRunningTime="2026-02-16 17:25:16.682745177 +0000 UTC m=+1542.630839824" watchObservedRunningTime="2026-02-16 17:25:16.686552445 +0000 UTC m=+1542.634647112" Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.848480 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.849192 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="e8855b32-0f8f-4cc3-af68-abdb6219b49e" containerName="nova-api-log" containerID="cri-o://b8ce5687165cd769be4679f4ee59a3a537ff546cdd9c4a5a2ac3673747c44991" gracePeriod=30 Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.849220 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e8855b32-0f8f-4cc3-af68-abdb6219b49e" containerName="nova-api-api" containerID="cri-o://ab518f5d52d454fc51a1144f310a7d00194e1a086d3d14e547239203b92eedc5" gracePeriod=30 Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.867022 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.867221 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b0ed14a7-ee41-453d-8114-8e955b120c40" containerName="nova-scheduler-scheduler" containerID="cri-o://e1c0620537b9e6151bd065d33ec4815bd4cea215dda129f52b146e6a6a4e74bd" gracePeriod=30 Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.953261 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.953504 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6268e9f0-e992-4887-8a99-80a1b5459cb3" containerName="nova-metadata-log" containerID="cri-o://c98da034c395daf027cfac4174ee49fc0913d68eaa7a105f4c31c0046e08cd64" gracePeriod=30 Feb 16 17:25:16 crc kubenswrapper[4794]: I0216 17:25:16.953906 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6268e9f0-e992-4887-8a99-80a1b5459cb3" containerName="nova-metadata-metadata" containerID="cri-o://e14af41e3422319bb29fab616cc6d9d89fa53d6d466f15dcbd3087e841726665" gracePeriod=30 Feb 16 17:25:17 crc kubenswrapper[4794]: I0216 17:25:17.668929 4794 generic.go:334] "Generic (PLEG): 
container finished" podID="e8855b32-0f8f-4cc3-af68-abdb6219b49e" containerID="b8ce5687165cd769be4679f4ee59a3a537ff546cdd9c4a5a2ac3673747c44991" exitCode=143 Feb 16 17:25:17 crc kubenswrapper[4794]: I0216 17:25:17.668990 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8855b32-0f8f-4cc3-af68-abdb6219b49e","Type":"ContainerDied","Data":"b8ce5687165cd769be4679f4ee59a3a537ff546cdd9c4a5a2ac3673747c44991"} Feb 16 17:25:17 crc kubenswrapper[4794]: I0216 17:25:17.671849 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6268e9f0-e992-4887-8a99-80a1b5459cb3","Type":"ContainerDied","Data":"c98da034c395daf027cfac4174ee49fc0913d68eaa7a105f4c31c0046e08cd64"} Feb 16 17:25:17 crc kubenswrapper[4794]: I0216 17:25:17.671848 4794 generic.go:334] "Generic (PLEG): container finished" podID="6268e9f0-e992-4887-8a99-80a1b5459cb3" containerID="c98da034c395daf027cfac4174ee49fc0913d68eaa7a105f4c31c0046e08cd64" exitCode=143 Feb 16 17:25:19 crc kubenswrapper[4794]: E0216 17:25:19.376281 4794 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e1c0620537b9e6151bd065d33ec4815bd4cea215dda129f52b146e6a6a4e74bd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 17:25:19 crc kubenswrapper[4794]: E0216 17:25:19.378711 4794 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e1c0620537b9e6151bd065d33ec4815bd4cea215dda129f52b146e6a6a4e74bd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 17:25:19 crc kubenswrapper[4794]: E0216 17:25:19.380556 4794 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="e1c0620537b9e6151bd065d33ec4815bd4cea215dda129f52b146e6a6a4e74bd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 17:25:19 crc kubenswrapper[4794]: E0216 17:25:19.380617 4794 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b0ed14a7-ee41-453d-8114-8e955b120c40" containerName="nova-scheduler-scheduler" Feb 16 17:25:20 crc kubenswrapper[4794]: I0216 17:25:20.711624 4794 generic.go:334] "Generic (PLEG): container finished" podID="6268e9f0-e992-4887-8a99-80a1b5459cb3" containerID="e14af41e3422319bb29fab616cc6d9d89fa53d6d466f15dcbd3087e841726665" exitCode=0 Feb 16 17:25:20 crc kubenswrapper[4794]: I0216 17:25:20.711662 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6268e9f0-e992-4887-8a99-80a1b5459cb3","Type":"ContainerDied","Data":"e14af41e3422319bb29fab616cc6d9d89fa53d6d466f15dcbd3087e841726665"} Feb 16 17:25:20 crc kubenswrapper[4794]: I0216 17:25:20.715141 4794 generic.go:334] "Generic (PLEG): container finished" podID="b0ed14a7-ee41-453d-8114-8e955b120c40" containerID="e1c0620537b9e6151bd065d33ec4815bd4cea215dda129f52b146e6a6a4e74bd" exitCode=0 Feb 16 17:25:20 crc kubenswrapper[4794]: I0216 17:25:20.715172 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0ed14a7-ee41-453d-8114-8e955b120c40","Type":"ContainerDied","Data":"e1c0620537b9e6151bd065d33ec4815bd4cea215dda129f52b146e6a6a4e74bd"} Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.084702 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.093983 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.163269 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ed14a7-ee41-453d-8114-8e955b120c40-combined-ca-bundle\") pod \"b0ed14a7-ee41-453d-8114-8e955b120c40\" (UID: \"b0ed14a7-ee41-453d-8114-8e955b120c40\") " Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.163383 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tc8k\" (UniqueName: \"kubernetes.io/projected/6268e9f0-e992-4887-8a99-80a1b5459cb3-kube-api-access-9tc8k\") pod \"6268e9f0-e992-4887-8a99-80a1b5459cb3\" (UID: \"6268e9f0-e992-4887-8a99-80a1b5459cb3\") " Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.163592 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvl46\" (UniqueName: \"kubernetes.io/projected/b0ed14a7-ee41-453d-8114-8e955b120c40-kube-api-access-qvl46\") pod \"b0ed14a7-ee41-453d-8114-8e955b120c40\" (UID: \"b0ed14a7-ee41-453d-8114-8e955b120c40\") " Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.163698 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6268e9f0-e992-4887-8a99-80a1b5459cb3-combined-ca-bundle\") pod \"6268e9f0-e992-4887-8a99-80a1b5459cb3\" (UID: \"6268e9f0-e992-4887-8a99-80a1b5459cb3\") " Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.163773 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ed14a7-ee41-453d-8114-8e955b120c40-config-data\") pod \"b0ed14a7-ee41-453d-8114-8e955b120c40\" (UID: \"b0ed14a7-ee41-453d-8114-8e955b120c40\") " Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.163810 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6268e9f0-e992-4887-8a99-80a1b5459cb3-logs\") pod \"6268e9f0-e992-4887-8a99-80a1b5459cb3\" (UID: \"6268e9f0-e992-4887-8a99-80a1b5459cb3\") " Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.163836 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6268e9f0-e992-4887-8a99-80a1b5459cb3-config-data\") pod \"6268e9f0-e992-4887-8a99-80a1b5459cb3\" (UID: \"6268e9f0-e992-4887-8a99-80a1b5459cb3\") " Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.163894 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6268e9f0-e992-4887-8a99-80a1b5459cb3-nova-metadata-tls-certs\") pod \"6268e9f0-e992-4887-8a99-80a1b5459cb3\" (UID: \"6268e9f0-e992-4887-8a99-80a1b5459cb3\") " Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.168022 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6268e9f0-e992-4887-8a99-80a1b5459cb3-logs" (OuterVolumeSpecName: "logs") pod "6268e9f0-e992-4887-8a99-80a1b5459cb3" (UID: "6268e9f0-e992-4887-8a99-80a1b5459cb3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.178401 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ed14a7-ee41-453d-8114-8e955b120c40-kube-api-access-qvl46" (OuterVolumeSpecName: "kube-api-access-qvl46") pod "b0ed14a7-ee41-453d-8114-8e955b120c40" (UID: "b0ed14a7-ee41-453d-8114-8e955b120c40"). InnerVolumeSpecName "kube-api-access-qvl46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.178566 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6268e9f0-e992-4887-8a99-80a1b5459cb3-kube-api-access-9tc8k" (OuterVolumeSpecName: "kube-api-access-9tc8k") pod "6268e9f0-e992-4887-8a99-80a1b5459cb3" (UID: "6268e9f0-e992-4887-8a99-80a1b5459cb3"). InnerVolumeSpecName "kube-api-access-9tc8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.219016 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6268e9f0-e992-4887-8a99-80a1b5459cb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6268e9f0-e992-4887-8a99-80a1b5459cb3" (UID: "6268e9f0-e992-4887-8a99-80a1b5459cb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.225443 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0ed14a7-ee41-453d-8114-8e955b120c40-config-data" (OuterVolumeSpecName: "config-data") pod "b0ed14a7-ee41-453d-8114-8e955b120c40" (UID: "b0ed14a7-ee41-453d-8114-8e955b120c40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.227290 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0ed14a7-ee41-453d-8114-8e955b120c40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0ed14a7-ee41-453d-8114-8e955b120c40" (UID: "b0ed14a7-ee41-453d-8114-8e955b120c40"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.258487 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6268e9f0-e992-4887-8a99-80a1b5459cb3-config-data" (OuterVolumeSpecName: "config-data") pod "6268e9f0-e992-4887-8a99-80a1b5459cb3" (UID: "6268e9f0-e992-4887-8a99-80a1b5459cb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.269480 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6268e9f0-e992-4887-8a99-80a1b5459cb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.269664 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ed14a7-ee41-453d-8114-8e955b120c40-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.269722 4794 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6268e9f0-e992-4887-8a99-80a1b5459cb3-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.269779 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6268e9f0-e992-4887-8a99-80a1b5459cb3-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.269842 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ed14a7-ee41-453d-8114-8e955b120c40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.269904 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tc8k\" (UniqueName: 
\"kubernetes.io/projected/6268e9f0-e992-4887-8a99-80a1b5459cb3-kube-api-access-9tc8k\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.269964 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvl46\" (UniqueName: \"kubernetes.io/projected/b0ed14a7-ee41-453d-8114-8e955b120c40-kube-api-access-qvl46\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.310566 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6268e9f0-e992-4887-8a99-80a1b5459cb3-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "6268e9f0-e992-4887-8a99-80a1b5459cb3" (UID: "6268e9f0-e992-4887-8a99-80a1b5459cb3"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.372536 4794 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6268e9f0-e992-4887-8a99-80a1b5459cb3-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.729475 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.733581 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6268e9f0-e992-4887-8a99-80a1b5459cb3","Type":"ContainerDied","Data":"a09a142da95bf005d0ccb35d14810bf29a50ed1cea71f06256e14a6d44dd3adf"} Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.733657 4794 scope.go:117] "RemoveContainer" containerID="e14af41e3422319bb29fab616cc6d9d89fa53d6d466f15dcbd3087e841726665" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.736696 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0ed14a7-ee41-453d-8114-8e955b120c40","Type":"ContainerDied","Data":"5df6d44cc4570ddb288c8ae751cff1dd0a5f94ae94ea470cf908de4cea2dd2a6"} Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.736794 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.780565 4794 scope.go:117] "RemoveContainer" containerID="c98da034c395daf027cfac4174ee49fc0913d68eaa7a105f4c31c0046e08cd64" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.799383 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.805313 4794 scope.go:117] "RemoveContainer" containerID="e1c0620537b9e6151bd065d33ec4815bd4cea215dda129f52b146e6a6a4e74bd" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.824356 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.846655 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.880365 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:25:21 crc 
kubenswrapper[4794]: I0216 17:25:21.899992 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:25:21 crc kubenswrapper[4794]: E0216 17:25:21.900932 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ed14a7-ee41-453d-8114-8e955b120c40" containerName="nova-scheduler-scheduler" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.900956 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ed14a7-ee41-453d-8114-8e955b120c40" containerName="nova-scheduler-scheduler" Feb 16 17:25:21 crc kubenswrapper[4794]: E0216 17:25:21.900990 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6268e9f0-e992-4887-8a99-80a1b5459cb3" containerName="nova-metadata-log" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.900999 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="6268e9f0-e992-4887-8a99-80a1b5459cb3" containerName="nova-metadata-log" Feb 16 17:25:21 crc kubenswrapper[4794]: E0216 17:25:21.901013 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80d48a50-835e-455f-81f7-9c40a212b9e6" containerName="nova-manage" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.901025 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="80d48a50-835e-455f-81f7-9c40a212b9e6" containerName="nova-manage" Feb 16 17:25:21 crc kubenswrapper[4794]: E0216 17:25:21.901042 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6268e9f0-e992-4887-8a99-80a1b5459cb3" containerName="nova-metadata-metadata" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.901052 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="6268e9f0-e992-4887-8a99-80a1b5459cb3" containerName="nova-metadata-metadata" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.901442 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="80d48a50-835e-455f-81f7-9c40a212b9e6" containerName="nova-manage" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 
17:25:21.901467 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ed14a7-ee41-453d-8114-8e955b120c40" containerName="nova-scheduler-scheduler" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.901483 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="6268e9f0-e992-4887-8a99-80a1b5459cb3" containerName="nova-metadata-log" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.901500 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="6268e9f0-e992-4887-8a99-80a1b5459cb3" containerName="nova-metadata-metadata" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.903283 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.906966 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.907478 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.924385 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.946385 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.948681 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.958053 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.964631 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.995845 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lrw2\" (UniqueName: \"kubernetes.io/projected/759dd9df-054e-4675-b614-d6cf32280981-kube-api-access-8lrw2\") pod \"nova-metadata-0\" (UID: \"759dd9df-054e-4675-b614-d6cf32280981\") " pod="openstack/nova-metadata-0" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.995988 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d18ce339-9b99-485a-8bff-1aa4bbf31dd7-config-data\") pod \"nova-scheduler-0\" (UID: \"d18ce339-9b99-485a-8bff-1aa4bbf31dd7\") " pod="openstack/nova-scheduler-0" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.996042 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bwlh\" (UniqueName: \"kubernetes.io/projected/d18ce339-9b99-485a-8bff-1aa4bbf31dd7-kube-api-access-6bwlh\") pod \"nova-scheduler-0\" (UID: \"d18ce339-9b99-485a-8bff-1aa4bbf31dd7\") " pod="openstack/nova-scheduler-0" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.996088 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/759dd9df-054e-4675-b614-d6cf32280981-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"759dd9df-054e-4675-b614-d6cf32280981\") " pod="openstack/nova-metadata-0" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 
17:25:21.996113 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/759dd9df-054e-4675-b614-d6cf32280981-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"759dd9df-054e-4675-b614-d6cf32280981\") " pod="openstack/nova-metadata-0" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.996175 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/759dd9df-054e-4675-b614-d6cf32280981-logs\") pod \"nova-metadata-0\" (UID: \"759dd9df-054e-4675-b614-d6cf32280981\") " pod="openstack/nova-metadata-0" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.996203 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d18ce339-9b99-485a-8bff-1aa4bbf31dd7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d18ce339-9b99-485a-8bff-1aa4bbf31dd7\") " pod="openstack/nova-scheduler-0" Feb 16 17:25:21 crc kubenswrapper[4794]: I0216 17:25:21.996250 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/759dd9df-054e-4675-b614-d6cf32280981-config-data\") pod \"nova-metadata-0\" (UID: \"759dd9df-054e-4675-b614-d6cf32280981\") " pod="openstack/nova-metadata-0" Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.097770 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d18ce339-9b99-485a-8bff-1aa4bbf31dd7-config-data\") pod \"nova-scheduler-0\" (UID: \"d18ce339-9b99-485a-8bff-1aa4bbf31dd7\") " pod="openstack/nova-scheduler-0" Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.097824 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bwlh\" 
(UniqueName: \"kubernetes.io/projected/d18ce339-9b99-485a-8bff-1aa4bbf31dd7-kube-api-access-6bwlh\") pod \"nova-scheduler-0\" (UID: \"d18ce339-9b99-485a-8bff-1aa4bbf31dd7\") " pod="openstack/nova-scheduler-0" Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.097875 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/759dd9df-054e-4675-b614-d6cf32280981-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"759dd9df-054e-4675-b614-d6cf32280981\") " pod="openstack/nova-metadata-0" Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.097892 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/759dd9df-054e-4675-b614-d6cf32280981-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"759dd9df-054e-4675-b614-d6cf32280981\") " pod="openstack/nova-metadata-0" Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.097943 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/759dd9df-054e-4675-b614-d6cf32280981-logs\") pod \"nova-metadata-0\" (UID: \"759dd9df-054e-4675-b614-d6cf32280981\") " pod="openstack/nova-metadata-0" Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.097967 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d18ce339-9b99-485a-8bff-1aa4bbf31dd7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d18ce339-9b99-485a-8bff-1aa4bbf31dd7\") " pod="openstack/nova-scheduler-0" Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.098001 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/759dd9df-054e-4675-b614-d6cf32280981-config-data\") pod \"nova-metadata-0\" (UID: \"759dd9df-054e-4675-b614-d6cf32280981\") " 
pod="openstack/nova-metadata-0" Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.098085 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lrw2\" (UniqueName: \"kubernetes.io/projected/759dd9df-054e-4675-b614-d6cf32280981-kube-api-access-8lrw2\") pod \"nova-metadata-0\" (UID: \"759dd9df-054e-4675-b614-d6cf32280981\") " pod="openstack/nova-metadata-0" Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.098803 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/759dd9df-054e-4675-b614-d6cf32280981-logs\") pod \"nova-metadata-0\" (UID: \"759dd9df-054e-4675-b614-d6cf32280981\") " pod="openstack/nova-metadata-0" Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.109551 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/759dd9df-054e-4675-b614-d6cf32280981-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"759dd9df-054e-4675-b614-d6cf32280981\") " pod="openstack/nova-metadata-0" Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.122082 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/759dd9df-054e-4675-b614-d6cf32280981-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"759dd9df-054e-4675-b614-d6cf32280981\") " pod="openstack/nova-metadata-0" Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.122939 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d18ce339-9b99-485a-8bff-1aa4bbf31dd7-config-data\") pod \"nova-scheduler-0\" (UID: \"d18ce339-9b99-485a-8bff-1aa4bbf31dd7\") " pod="openstack/nova-scheduler-0" Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.122969 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d18ce339-9b99-485a-8bff-1aa4bbf31dd7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d18ce339-9b99-485a-8bff-1aa4bbf31dd7\") " pod="openstack/nova-scheduler-0" Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.149723 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bwlh\" (UniqueName: \"kubernetes.io/projected/d18ce339-9b99-485a-8bff-1aa4bbf31dd7-kube-api-access-6bwlh\") pod \"nova-scheduler-0\" (UID: \"d18ce339-9b99-485a-8bff-1aa4bbf31dd7\") " pod="openstack/nova-scheduler-0" Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.163380 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lrw2\" (UniqueName: \"kubernetes.io/projected/759dd9df-054e-4675-b614-d6cf32280981-kube-api-access-8lrw2\") pod \"nova-metadata-0\" (UID: \"759dd9df-054e-4675-b614-d6cf32280981\") " pod="openstack/nova-metadata-0" Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.181952 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/759dd9df-054e-4675-b614-d6cf32280981-config-data\") pod \"nova-metadata-0\" (UID: \"759dd9df-054e-4675-b614-d6cf32280981\") " pod="openstack/nova-metadata-0" Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.227862 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.311579 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.723406 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.753532 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"759dd9df-054e-4675-b614-d6cf32280981","Type":"ContainerStarted","Data":"171ac57ffe352e88d67b17f5f03b962c4729555cf1ccfcad4cf53320d1484875"} Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.756350 4794 generic.go:334] "Generic (PLEG): container finished" podID="e8855b32-0f8f-4cc3-af68-abdb6219b49e" containerID="ab518f5d52d454fc51a1144f310a7d00194e1a086d3d14e547239203b92eedc5" exitCode=0 Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.756457 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8855b32-0f8f-4cc3-af68-abdb6219b49e","Type":"ContainerDied","Data":"ab518f5d52d454fc51a1144f310a7d00194e1a086d3d14e547239203b92eedc5"} Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.804444 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6268e9f0-e992-4887-8a99-80a1b5459cb3" path="/var/lib/kubelet/pods/6268e9f0-e992-4887-8a99-80a1b5459cb3/volumes" Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.805334 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0ed14a7-ee41-453d-8114-8e955b120c40" path="/var/lib/kubelet/pods/b0ed14a7-ee41-453d-8114-8e955b120c40/volumes" Feb 16 17:25:22 crc kubenswrapper[4794]: I0216 17:25:22.888193 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.072482 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.196247 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-public-tls-certs\") pod \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.196456 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kctqq\" (UniqueName: \"kubernetes.io/projected/e8855b32-0f8f-4cc3-af68-abdb6219b49e-kube-api-access-kctqq\") pod \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.196518 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8855b32-0f8f-4cc3-af68-abdb6219b49e-logs\") pod \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.196581 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-combined-ca-bundle\") pod \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.196718 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-config-data\") pod \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.196801 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-internal-tls-certs\") pod \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\" (UID: \"e8855b32-0f8f-4cc3-af68-abdb6219b49e\") " Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.197150 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8855b32-0f8f-4cc3-af68-abdb6219b49e-logs" (OuterVolumeSpecName: "logs") pod "e8855b32-0f8f-4cc3-af68-abdb6219b49e" (UID: "e8855b32-0f8f-4cc3-af68-abdb6219b49e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.197696 4794 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8855b32-0f8f-4cc3-af68-abdb6219b49e-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.200528 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8855b32-0f8f-4cc3-af68-abdb6219b49e-kube-api-access-kctqq" (OuterVolumeSpecName: "kube-api-access-kctqq") pod "e8855b32-0f8f-4cc3-af68-abdb6219b49e" (UID: "e8855b32-0f8f-4cc3-af68-abdb6219b49e"). InnerVolumeSpecName "kube-api-access-kctqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.231246 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-config-data" (OuterVolumeSpecName: "config-data") pod "e8855b32-0f8f-4cc3-af68-abdb6219b49e" (UID: "e8855b32-0f8f-4cc3-af68-abdb6219b49e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.249562 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8855b32-0f8f-4cc3-af68-abdb6219b49e" (UID: "e8855b32-0f8f-4cc3-af68-abdb6219b49e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.270125 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e8855b32-0f8f-4cc3-af68-abdb6219b49e" (UID: "e8855b32-0f8f-4cc3-af68-abdb6219b49e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.274727 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e8855b32-0f8f-4cc3-af68-abdb6219b49e" (UID: "e8855b32-0f8f-4cc3-af68-abdb6219b49e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.299784 4794 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.299824 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kctqq\" (UniqueName: \"kubernetes.io/projected/e8855b32-0f8f-4cc3-af68-abdb6219b49e-kube-api-access-kctqq\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.299840 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.299865 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.299875 4794 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8855b32-0f8f-4cc3-af68-abdb6219b49e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.776924 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"759dd9df-054e-4675-b614-d6cf32280981","Type":"ContainerStarted","Data":"09e73cfcf5d6e7a506e88e756d84f1e7b6ed187b41f410792c62e34586bf40a6"} Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.777373 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"759dd9df-054e-4675-b614-d6cf32280981","Type":"ContainerStarted","Data":"b57f6b0d974d8b33b6a34f9e8a0b6832a1c9a3fdb58047414776ff6b488f99d1"} Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.779858 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8855b32-0f8f-4cc3-af68-abdb6219b49e","Type":"ContainerDied","Data":"21a8fcfb36538e18c14b964b7c7d3314ec33b2ba82b39e703f5c8259cf1a3e96"} Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.779911 4794 scope.go:117] "RemoveContainer" containerID="ab518f5d52d454fc51a1144f310a7d00194e1a086d3d14e547239203b92eedc5" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.780065 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.796475 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d18ce339-9b99-485a-8bff-1aa4bbf31dd7","Type":"ContainerStarted","Data":"700f0192293e98ec8ae0cfb9b02f989afda2079dbb4b87502afc7ae30bbf94b6"} Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.796708 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d18ce339-9b99-485a-8bff-1aa4bbf31dd7","Type":"ContainerStarted","Data":"58a0f979b9fc11139470f8d7b718c8019a4c3fab1e359931ef2222a2ac8a2205"} Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.826209 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.826183791 podStartE2EDuration="2.826183791s" podCreationTimestamp="2026-02-16 17:25:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:25:23.802853161 +0000 UTC m=+1549.750947818" watchObservedRunningTime="2026-02-16 17:25:23.826183791 +0000 UTC m=+1549.774278438" Feb 16 17:25:23 crc 
kubenswrapper[4794]: I0216 17:25:23.830884 4794 scope.go:117] "RemoveContainer" containerID="b8ce5687165cd769be4679f4ee59a3a537ff546cdd9c4a5a2ac3673747c44991" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.837513 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.8374930320000002 podStartE2EDuration="2.837493032s" podCreationTimestamp="2026-02-16 17:25:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:25:23.831650326 +0000 UTC m=+1549.779744963" watchObservedRunningTime="2026-02-16 17:25:23.837493032 +0000 UTC m=+1549.785587679" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.868799 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.882743 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.894612 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 17:25:23 crc kubenswrapper[4794]: E0216 17:25:23.895370 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8855b32-0f8f-4cc3-af68-abdb6219b49e" containerName="nova-api-log" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.895453 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8855b32-0f8f-4cc3-af68-abdb6219b49e" containerName="nova-api-log" Feb 16 17:25:23 crc kubenswrapper[4794]: E0216 17:25:23.895544 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8855b32-0f8f-4cc3-af68-abdb6219b49e" containerName="nova-api-api" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.895613 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8855b32-0f8f-4cc3-af68-abdb6219b49e" containerName="nova-api-api" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 
17:25:23.895942 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8855b32-0f8f-4cc3-af68-abdb6219b49e" containerName="nova-api-log" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.896080 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8855b32-0f8f-4cc3-af68-abdb6219b49e" containerName="nova-api-api" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.897598 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.904936 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.905168 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.906342 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 17:25:23 crc kubenswrapper[4794]: I0216 17:25:23.916802 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.019909 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e7d5a0f-a988-41f2-8e63-6a3fccddbacc-config-data\") pod \"nova-api-0\" (UID: \"3e7d5a0f-a988-41f2-8e63-6a3fccddbacc\") " pod="openstack/nova-api-0" Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.020077 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e7d5a0f-a988-41f2-8e63-6a3fccddbacc-logs\") pod \"nova-api-0\" (UID: \"3e7d5a0f-a988-41f2-8e63-6a3fccddbacc\") " pod="openstack/nova-api-0" Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.020188 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7d5a0f-a988-41f2-8e63-6a3fccddbacc-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3e7d5a0f-a988-41f2-8e63-6a3fccddbacc\") " pod="openstack/nova-api-0" Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.020540 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7d5a0f-a988-41f2-8e63-6a3fccddbacc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3e7d5a0f-a988-41f2-8e63-6a3fccddbacc\") " pod="openstack/nova-api-0" Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.020735 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlml4\" (UniqueName: \"kubernetes.io/projected/3e7d5a0f-a988-41f2-8e63-6a3fccddbacc-kube-api-access-jlml4\") pod \"nova-api-0\" (UID: \"3e7d5a0f-a988-41f2-8e63-6a3fccddbacc\") " pod="openstack/nova-api-0" Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.020957 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7d5a0f-a988-41f2-8e63-6a3fccddbacc-public-tls-certs\") pod \"nova-api-0\" (UID: \"3e7d5a0f-a988-41f2-8e63-6a3fccddbacc\") " pod="openstack/nova-api-0" Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.122747 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7d5a0f-a988-41f2-8e63-6a3fccddbacc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3e7d5a0f-a988-41f2-8e63-6a3fccddbacc\") " pod="openstack/nova-api-0" Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.122827 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlml4\" (UniqueName: 
\"kubernetes.io/projected/3e7d5a0f-a988-41f2-8e63-6a3fccddbacc-kube-api-access-jlml4\") pod \"nova-api-0\" (UID: \"3e7d5a0f-a988-41f2-8e63-6a3fccddbacc\") " pod="openstack/nova-api-0" Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.122907 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7d5a0f-a988-41f2-8e63-6a3fccddbacc-public-tls-certs\") pod \"nova-api-0\" (UID: \"3e7d5a0f-a988-41f2-8e63-6a3fccddbacc\") " pod="openstack/nova-api-0" Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.122970 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e7d5a0f-a988-41f2-8e63-6a3fccddbacc-config-data\") pod \"nova-api-0\" (UID: \"3e7d5a0f-a988-41f2-8e63-6a3fccddbacc\") " pod="openstack/nova-api-0" Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.123025 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e7d5a0f-a988-41f2-8e63-6a3fccddbacc-logs\") pod \"nova-api-0\" (UID: \"3e7d5a0f-a988-41f2-8e63-6a3fccddbacc\") " pod="openstack/nova-api-0" Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.123052 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7d5a0f-a988-41f2-8e63-6a3fccddbacc-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3e7d5a0f-a988-41f2-8e63-6a3fccddbacc\") " pod="openstack/nova-api-0" Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.123495 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e7d5a0f-a988-41f2-8e63-6a3fccddbacc-logs\") pod \"nova-api-0\" (UID: \"3e7d5a0f-a988-41f2-8e63-6a3fccddbacc\") " pod="openstack/nova-api-0" Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.127992 4794 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7d5a0f-a988-41f2-8e63-6a3fccddbacc-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3e7d5a0f-a988-41f2-8e63-6a3fccddbacc\") " pod="openstack/nova-api-0" Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.128264 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7d5a0f-a988-41f2-8e63-6a3fccddbacc-public-tls-certs\") pod \"nova-api-0\" (UID: \"3e7d5a0f-a988-41f2-8e63-6a3fccddbacc\") " pod="openstack/nova-api-0" Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.128461 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e7d5a0f-a988-41f2-8e63-6a3fccddbacc-config-data\") pod \"nova-api-0\" (UID: \"3e7d5a0f-a988-41f2-8e63-6a3fccddbacc\") " pod="openstack/nova-api-0" Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.133899 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7d5a0f-a988-41f2-8e63-6a3fccddbacc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3e7d5a0f-a988-41f2-8e63-6a3fccddbacc\") " pod="openstack/nova-api-0" Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.142566 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlml4\" (UniqueName: \"kubernetes.io/projected/3e7d5a0f-a988-41f2-8e63-6a3fccddbacc-kube-api-access-jlml4\") pod \"nova-api-0\" (UID: \"3e7d5a0f-a988-41f2-8e63-6a3fccddbacc\") " pod="openstack/nova-api-0" Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.215269 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:25:24 crc kubenswrapper[4794]: W0216 17:25:24.724984 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e7d5a0f_a988_41f2_8e63_6a3fccddbacc.slice/crio-b429b1cadf57f9736ae9278c57b95d3cc13de053c33117ce83e8633ffdc569c9 WatchSource:0}: Error finding container b429b1cadf57f9736ae9278c57b95d3cc13de053c33117ce83e8633ffdc569c9: Status 404 returned error can't find the container with id b429b1cadf57f9736ae9278c57b95d3cc13de053c33117ce83e8633ffdc569c9 Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.725318 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.813143 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8855b32-0f8f-4cc3-af68-abdb6219b49e" path="/var/lib/kubelet/pods/e8855b32-0f8f-4cc3-af68-abdb6219b49e/volumes" Feb 16 17:25:24 crc kubenswrapper[4794]: I0216 17:25:24.829516 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3e7d5a0f-a988-41f2-8e63-6a3fccddbacc","Type":"ContainerStarted","Data":"b429b1cadf57f9736ae9278c57b95d3cc13de053c33117ce83e8633ffdc569c9"} Feb 16 17:25:25 crc kubenswrapper[4794]: I0216 17:25:25.397218 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6268e9f0-e992-4887-8a99-80a1b5459cb3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 17:25:25 crc kubenswrapper[4794]: I0216 17:25:25.397315 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6268e9f0-e992-4887-8a99-80a1b5459cb3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 17:25:25 crc kubenswrapper[4794]: I0216 17:25:25.791481 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691" Feb 16 17:25:25 crc kubenswrapper[4794]: E0216 17:25:25.791709 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:25:25 crc kubenswrapper[4794]: I0216 17:25:25.842443 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3e7d5a0f-a988-41f2-8e63-6a3fccddbacc","Type":"ContainerStarted","Data":"1cf6a7a490fad64dc721545be3f468b4f780c793898026b6c4fcf0f399c638f5"} Feb 16 17:25:25 crc kubenswrapper[4794]: I0216 17:25:25.842485 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3e7d5a0f-a988-41f2-8e63-6a3fccddbacc","Type":"ContainerStarted","Data":"f333fe3bea5137eba068a75758b9a985549e186fba4e3ac9876583e3451696b4"} Feb 16 17:25:25 crc kubenswrapper[4794]: I0216 17:25:25.867255 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.867228393 podStartE2EDuration="2.867228393s" podCreationTimestamp="2026-02-16 17:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:25:25.86110483 +0000 UTC m=+1551.809199477" watchObservedRunningTime="2026-02-16 17:25:25.867228393 +0000 UTC m=+1551.815323050" Feb 16 17:25:27 crc kubenswrapper[4794]: I0216 17:25:27.229069 4794 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 17:25:27 crc kubenswrapper[4794]: I0216 17:25:27.229435 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 17:25:27 crc kubenswrapper[4794]: I0216 17:25:27.313423 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 17:25:32 crc kubenswrapper[4794]: I0216 17:25:32.228290 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 17:25:32 crc kubenswrapper[4794]: I0216 17:25:32.228901 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 17:25:32 crc kubenswrapper[4794]: I0216 17:25:32.312975 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 17:25:32 crc kubenswrapper[4794]: I0216 17:25:32.353264 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 17:25:32 crc kubenswrapper[4794]: I0216 17:25:32.895371 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 16 17:25:32 crc kubenswrapper[4794]: I0216 17:25:32.920995 4794 generic.go:334] "Generic (PLEG): container finished" podID="0135c16b-58fd-4898-b711-786fa961ddfe" containerID="c97b08cfda5252db5e86bb5a56ebf72045e47ee226aaebc0a0eef401aaed9c8e" exitCode=137 Feb 16 17:25:32 crc kubenswrapper[4794]: I0216 17:25:32.922235 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 16 17:25:32 crc kubenswrapper[4794]: I0216 17:25:32.922925 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0135c16b-58fd-4898-b711-786fa961ddfe","Type":"ContainerDied","Data":"c97b08cfda5252db5e86bb5a56ebf72045e47ee226aaebc0a0eef401aaed9c8e"} Feb 16 17:25:32 crc kubenswrapper[4794]: I0216 17:25:32.922963 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0135c16b-58fd-4898-b711-786fa961ddfe","Type":"ContainerDied","Data":"a23741f0aaed88324d4a1c6cb639e09a1940213390d84f2df3117f248acd462f"} Feb 16 17:25:32 crc kubenswrapper[4794]: I0216 17:25:32.922986 4794 scope.go:117] "RemoveContainer" containerID="c97b08cfda5252db5e86bb5a56ebf72045e47ee226aaebc0a0eef401aaed9c8e" Feb 16 17:25:32 crc kubenswrapper[4794]: I0216 17:25:32.967133 4794 scope.go:117] "RemoveContainer" containerID="300c850cc4b8542798e3490418388e0f2a551a053f3d01edd7497101244a28c2" Feb 16 17:25:32 crc kubenswrapper[4794]: I0216 17:25:32.981125 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 17:25:32 crc kubenswrapper[4794]: I0216 17:25:32.992508 4794 scope.go:117] "RemoveContainer" containerID="1eb6de74e33c5395a20b2b53d19d7376cb4f1ddab30d9869af282eff0332f37e" Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.035932 4794 scope.go:117] "RemoveContainer" containerID="4e7db69f5536609f27e83c67261c26ed7d0d609ce0198a15105744257717f0ce" Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.056217 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0135c16b-58fd-4898-b711-786fa961ddfe-config-data\") pod \"0135c16b-58fd-4898-b711-786fa961ddfe\" (UID: \"0135c16b-58fd-4898-b711-786fa961ddfe\") " Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.056445 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0135c16b-58fd-4898-b711-786fa961ddfe-scripts\") pod \"0135c16b-58fd-4898-b711-786fa961ddfe\" (UID: \"0135c16b-58fd-4898-b711-786fa961ddfe\") " Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.056728 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnccv\" (UniqueName: \"kubernetes.io/projected/0135c16b-58fd-4898-b711-786fa961ddfe-kube-api-access-mnccv\") pod \"0135c16b-58fd-4898-b711-786fa961ddfe\" (UID: \"0135c16b-58fd-4898-b711-786fa961ddfe\") " Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.056819 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0135c16b-58fd-4898-b711-786fa961ddfe-combined-ca-bundle\") pod \"0135c16b-58fd-4898-b711-786fa961ddfe\" (UID: \"0135c16b-58fd-4898-b711-786fa961ddfe\") " Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.062695 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0135c16b-58fd-4898-b711-786fa961ddfe-kube-api-access-mnccv" (OuterVolumeSpecName: "kube-api-access-mnccv") pod "0135c16b-58fd-4898-b711-786fa961ddfe" (UID: "0135c16b-58fd-4898-b711-786fa961ddfe"). InnerVolumeSpecName "kube-api-access-mnccv". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.070104 4794 scope.go:117] "RemoveContainer" containerID="c97b08cfda5252db5e86bb5a56ebf72045e47ee226aaebc0a0eef401aaed9c8e"
Feb 16 17:25:33 crc kubenswrapper[4794]: E0216 17:25:33.071639 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c97b08cfda5252db5e86bb5a56ebf72045e47ee226aaebc0a0eef401aaed9c8e\": container with ID starting with c97b08cfda5252db5e86bb5a56ebf72045e47ee226aaebc0a0eef401aaed9c8e not found: ID does not exist" containerID="c97b08cfda5252db5e86bb5a56ebf72045e47ee226aaebc0a0eef401aaed9c8e"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.071684 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c97b08cfda5252db5e86bb5a56ebf72045e47ee226aaebc0a0eef401aaed9c8e"} err="failed to get container status \"c97b08cfda5252db5e86bb5a56ebf72045e47ee226aaebc0a0eef401aaed9c8e\": rpc error: code = NotFound desc = could not find container \"c97b08cfda5252db5e86bb5a56ebf72045e47ee226aaebc0a0eef401aaed9c8e\": container with ID starting with c97b08cfda5252db5e86bb5a56ebf72045e47ee226aaebc0a0eef401aaed9c8e not found: ID does not exist"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.071707 4794 scope.go:117] "RemoveContainer" containerID="300c850cc4b8542798e3490418388e0f2a551a053f3d01edd7497101244a28c2"
Feb 16 17:25:33 crc kubenswrapper[4794]: E0216 17:25:33.072099 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"300c850cc4b8542798e3490418388e0f2a551a053f3d01edd7497101244a28c2\": container with ID starting with 300c850cc4b8542798e3490418388e0f2a551a053f3d01edd7497101244a28c2 not found: ID does not exist" containerID="300c850cc4b8542798e3490418388e0f2a551a053f3d01edd7497101244a28c2"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.072140 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"300c850cc4b8542798e3490418388e0f2a551a053f3d01edd7497101244a28c2"} err="failed to get container status \"300c850cc4b8542798e3490418388e0f2a551a053f3d01edd7497101244a28c2\": rpc error: code = NotFound desc = could not find container \"300c850cc4b8542798e3490418388e0f2a551a053f3d01edd7497101244a28c2\": container with ID starting with 300c850cc4b8542798e3490418388e0f2a551a053f3d01edd7497101244a28c2 not found: ID does not exist"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.072166 4794 scope.go:117] "RemoveContainer" containerID="1eb6de74e33c5395a20b2b53d19d7376cb4f1ddab30d9869af282eff0332f37e"
Feb 16 17:25:33 crc kubenswrapper[4794]: E0216 17:25:33.072499 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1eb6de74e33c5395a20b2b53d19d7376cb4f1ddab30d9869af282eff0332f37e\": container with ID starting with 1eb6de74e33c5395a20b2b53d19d7376cb4f1ddab30d9869af282eff0332f37e not found: ID does not exist" containerID="1eb6de74e33c5395a20b2b53d19d7376cb4f1ddab30d9869af282eff0332f37e"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.072532 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eb6de74e33c5395a20b2b53d19d7376cb4f1ddab30d9869af282eff0332f37e"} err="failed to get container status \"1eb6de74e33c5395a20b2b53d19d7376cb4f1ddab30d9869af282eff0332f37e\": rpc error: code = NotFound desc = could not find container \"1eb6de74e33c5395a20b2b53d19d7376cb4f1ddab30d9869af282eff0332f37e\": container with ID starting with 1eb6de74e33c5395a20b2b53d19d7376cb4f1ddab30d9869af282eff0332f37e not found: ID does not exist"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.072556 4794 scope.go:117] "RemoveContainer" containerID="4e7db69f5536609f27e83c67261c26ed7d0d609ce0198a15105744257717f0ce"
Feb 16 17:25:33 crc kubenswrapper[4794]: E0216 17:25:33.072809 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e7db69f5536609f27e83c67261c26ed7d0d609ce0198a15105744257717f0ce\": container with ID starting with 4e7db69f5536609f27e83c67261c26ed7d0d609ce0198a15105744257717f0ce not found: ID does not exist" containerID="4e7db69f5536609f27e83c67261c26ed7d0d609ce0198a15105744257717f0ce"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.072840 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e7db69f5536609f27e83c67261c26ed7d0d609ce0198a15105744257717f0ce"} err="failed to get container status \"4e7db69f5536609f27e83c67261c26ed7d0d609ce0198a15105744257717f0ce\": rpc error: code = NotFound desc = could not find container \"4e7db69f5536609f27e83c67261c26ed7d0d609ce0198a15105744257717f0ce\": container with ID starting with 4e7db69f5536609f27e83c67261c26ed7d0d609ce0198a15105744257717f0ce not found: ID does not exist"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.075446 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0135c16b-58fd-4898-b711-786fa961ddfe-scripts" (OuterVolumeSpecName: "scripts") pod "0135c16b-58fd-4898-b711-786fa961ddfe" (UID: "0135c16b-58fd-4898-b711-786fa961ddfe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.159971 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnccv\" (UniqueName: \"kubernetes.io/projected/0135c16b-58fd-4898-b711-786fa961ddfe-kube-api-access-mnccv\") on node \"crc\" DevicePath \"\""
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.160171 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0135c16b-58fd-4898-b711-786fa961ddfe-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.213654 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0135c16b-58fd-4898-b711-786fa961ddfe-config-data" (OuterVolumeSpecName: "config-data") pod "0135c16b-58fd-4898-b711-786fa961ddfe" (UID: "0135c16b-58fd-4898-b711-786fa961ddfe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.225506 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0135c16b-58fd-4898-b711-786fa961ddfe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0135c16b-58fd-4898-b711-786fa961ddfe" (UID: "0135c16b-58fd-4898-b711-786fa961ddfe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.244638 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="759dd9df-054e-4675-b614-d6cf32280981" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.4:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.244660 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="759dd9df-054e-4675-b614-d6cf32280981" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.4:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.262760 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0135c16b-58fd-4898-b711-786fa961ddfe-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.262801 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0135c16b-58fd-4898-b711-786fa961ddfe-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.557890 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.569853 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"]
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.596764 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Feb 16 17:25:33 crc kubenswrapper[4794]: E0216 17:25:33.597289 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0135c16b-58fd-4898-b711-786fa961ddfe" containerName="aodh-listener"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.597316 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="0135c16b-58fd-4898-b711-786fa961ddfe" containerName="aodh-listener"
Feb 16 17:25:33 crc kubenswrapper[4794]: E0216 17:25:33.597337 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0135c16b-58fd-4898-b711-786fa961ddfe" containerName="aodh-notifier"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.597344 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="0135c16b-58fd-4898-b711-786fa961ddfe" containerName="aodh-notifier"
Feb 16 17:25:33 crc kubenswrapper[4794]: E0216 17:25:33.597358 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0135c16b-58fd-4898-b711-786fa961ddfe" containerName="aodh-evaluator"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.597365 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="0135c16b-58fd-4898-b711-786fa961ddfe" containerName="aodh-evaluator"
Feb 16 17:25:33 crc kubenswrapper[4794]: E0216 17:25:33.597378 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0135c16b-58fd-4898-b711-786fa961ddfe" containerName="aodh-api"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.597384 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="0135c16b-58fd-4898-b711-786fa961ddfe" containerName="aodh-api"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.597617 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="0135c16b-58fd-4898-b711-786fa961ddfe" containerName="aodh-api"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.597640 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="0135c16b-58fd-4898-b711-786fa961ddfe" containerName="aodh-notifier"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.597653 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="0135c16b-58fd-4898-b711-786fa961ddfe" containerName="aodh-evaluator"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.597665 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="0135c16b-58fd-4898-b711-786fa961ddfe" containerName="aodh-listener"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.600715 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.604698 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.605030 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.605141 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-kxvmt"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.605321 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.614354 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.631514 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.774536 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxnkn\" (UniqueName: \"kubernetes.io/projected/cd26d451-60ee-4078-a937-5c4969efc977-kube-api-access-hxnkn\") pod \"aodh-0\" (UID: \"cd26d451-60ee-4078-a937-5c4969efc977\") " pod="openstack/aodh-0"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.775083 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd26d451-60ee-4078-a937-5c4969efc977-internal-tls-certs\") pod \"aodh-0\" (UID: \"cd26d451-60ee-4078-a937-5c4969efc977\") " pod="openstack/aodh-0"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.775182 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd26d451-60ee-4078-a937-5c4969efc977-scripts\") pod \"aodh-0\" (UID: \"cd26d451-60ee-4078-a937-5c4969efc977\") " pod="openstack/aodh-0"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.775258 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd26d451-60ee-4078-a937-5c4969efc977-combined-ca-bundle\") pod \"aodh-0\" (UID: \"cd26d451-60ee-4078-a937-5c4969efc977\") " pod="openstack/aodh-0"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.775431 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd26d451-60ee-4078-a937-5c4969efc977-public-tls-certs\") pod \"aodh-0\" (UID: \"cd26d451-60ee-4078-a937-5c4969efc977\") " pod="openstack/aodh-0"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.775507 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd26d451-60ee-4078-a937-5c4969efc977-config-data\") pod \"aodh-0\" (UID: \"cd26d451-60ee-4078-a937-5c4969efc977\") " pod="openstack/aodh-0"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.877584 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd26d451-60ee-4078-a937-5c4969efc977-public-tls-certs\") pod \"aodh-0\" (UID: \"cd26d451-60ee-4078-a937-5c4969efc977\") " pod="openstack/aodh-0"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.877657 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd26d451-60ee-4078-a937-5c4969efc977-config-data\") pod \"aodh-0\" (UID: \"cd26d451-60ee-4078-a937-5c4969efc977\") " pod="openstack/aodh-0"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.877726 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxnkn\" (UniqueName: \"kubernetes.io/projected/cd26d451-60ee-4078-a937-5c4969efc977-kube-api-access-hxnkn\") pod \"aodh-0\" (UID: \"cd26d451-60ee-4078-a937-5c4969efc977\") " pod="openstack/aodh-0"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.877898 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd26d451-60ee-4078-a937-5c4969efc977-internal-tls-certs\") pod \"aodh-0\" (UID: \"cd26d451-60ee-4078-a937-5c4969efc977\") " pod="openstack/aodh-0"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.877967 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd26d451-60ee-4078-a937-5c4969efc977-scripts\") pod \"aodh-0\" (UID: \"cd26d451-60ee-4078-a937-5c4969efc977\") " pod="openstack/aodh-0"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.878030 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd26d451-60ee-4078-a937-5c4969efc977-combined-ca-bundle\") pod \"aodh-0\" (UID: \"cd26d451-60ee-4078-a937-5c4969efc977\") " pod="openstack/aodh-0"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.883336 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd26d451-60ee-4078-a937-5c4969efc977-internal-tls-certs\") pod \"aodh-0\" (UID: \"cd26d451-60ee-4078-a937-5c4969efc977\") " pod="openstack/aodh-0"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.884109 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd26d451-60ee-4078-a937-5c4969efc977-config-data\") pod \"aodh-0\" (UID: \"cd26d451-60ee-4078-a937-5c4969efc977\") " pod="openstack/aodh-0"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.884234 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd26d451-60ee-4078-a937-5c4969efc977-scripts\") pod \"aodh-0\" (UID: \"cd26d451-60ee-4078-a937-5c4969efc977\") " pod="openstack/aodh-0"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.884729 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd26d451-60ee-4078-a937-5c4969efc977-combined-ca-bundle\") pod \"aodh-0\" (UID: \"cd26d451-60ee-4078-a937-5c4969efc977\") " pod="openstack/aodh-0"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.888780 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd26d451-60ee-4078-a937-5c4969efc977-public-tls-certs\") pod \"aodh-0\" (UID: \"cd26d451-60ee-4078-a937-5c4969efc977\") " pod="openstack/aodh-0"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.903516 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxnkn\" (UniqueName: \"kubernetes.io/projected/cd26d451-60ee-4078-a937-5c4969efc977-kube-api-access-hxnkn\") pod \"aodh-0\" (UID: \"cd26d451-60ee-4078-a937-5c4969efc977\") " pod="openstack/aodh-0"
Feb 16 17:25:33 crc kubenswrapper[4794]: I0216 17:25:33.917253 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 16 17:25:34 crc kubenswrapper[4794]: I0216 17:25:34.216638 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 17:25:34 crc kubenswrapper[4794]: I0216 17:25:34.216979 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 17:25:34 crc kubenswrapper[4794]: W0216 17:25:34.449566 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd26d451_60ee_4078_a937_5c4969efc977.slice/crio-d3db0848808cff8502eed5c1d8dd7aa8e9764dead92545b2afee4908454431db WatchSource:0}: Error finding container d3db0848808cff8502eed5c1d8dd7aa8e9764dead92545b2afee4908454431db: Status 404 returned error can't find the container with id d3db0848808cff8502eed5c1d8dd7aa8e9764dead92545b2afee4908454431db
Feb 16 17:25:34 crc kubenswrapper[4794]: I0216 17:25:34.455120 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Feb 16 17:25:34 crc kubenswrapper[4794]: I0216 17:25:34.804165 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0135c16b-58fd-4898-b711-786fa961ddfe" path="/var/lib/kubelet/pods/0135c16b-58fd-4898-b711-786fa961ddfe/volumes"
Feb 16 17:25:34 crc kubenswrapper[4794]: I0216 17:25:34.955154 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"cd26d451-60ee-4078-a937-5c4969efc977","Type":"ContainerStarted","Data":"d3db0848808cff8502eed5c1d8dd7aa8e9764dead92545b2afee4908454431db"}
Feb 16 17:25:35 crc kubenswrapper[4794]: I0216 17:25:35.231546 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3e7d5a0f-a988-41f2-8e63-6a3fccddbacc" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.6:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 17:25:35 crc kubenswrapper[4794]: I0216 17:25:35.231595 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3e7d5a0f-a988-41f2-8e63-6a3fccddbacc" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.6:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 17:25:35 crc kubenswrapper[4794]: I0216 17:25:35.971740 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"cd26d451-60ee-4078-a937-5c4969efc977","Type":"ContainerStarted","Data":"6e7d742854645cba4a0f290a0f2da573abe0c9b01a961b18c11506be973c0584"}
Feb 16 17:25:35 crc kubenswrapper[4794]: I0216 17:25:35.972066 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"cd26d451-60ee-4078-a937-5c4969efc977","Type":"ContainerStarted","Data":"aca2a856baaa513280f2a468eb6851d6c98cd73f47abb0fffdf048d4769b3261"}
Feb 16 17:25:36 crc kubenswrapper[4794]: I0216 17:25:36.984079 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"cd26d451-60ee-4078-a937-5c4969efc977","Type":"ContainerStarted","Data":"54e946603a28f0abd8ba1ffe5e6d1c9759d6825cde14f31021b278fff3ceffe7"}
Feb 16 17:25:37 crc kubenswrapper[4794]: I0216 17:25:37.147398 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gsjk8"]
Feb 16 17:25:37 crc kubenswrapper[4794]: I0216 17:25:37.151057 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gsjk8"
Feb 16 17:25:37 crc kubenswrapper[4794]: I0216 17:25:37.165845 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gsjk8"]
Feb 16 17:25:37 crc kubenswrapper[4794]: I0216 17:25:37.252875 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e740940-700c-48d4-87ae-f3df5493444b-catalog-content\") pod \"community-operators-gsjk8\" (UID: \"3e740940-700c-48d4-87ae-f3df5493444b\") " pod="openshift-marketplace/community-operators-gsjk8"
Feb 16 17:25:37 crc kubenswrapper[4794]: I0216 17:25:37.252982 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt8nk\" (UniqueName: \"kubernetes.io/projected/3e740940-700c-48d4-87ae-f3df5493444b-kube-api-access-wt8nk\") pod \"community-operators-gsjk8\" (UID: \"3e740940-700c-48d4-87ae-f3df5493444b\") " pod="openshift-marketplace/community-operators-gsjk8"
Feb 16 17:25:37 crc kubenswrapper[4794]: I0216 17:25:37.253105 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e740940-700c-48d4-87ae-f3df5493444b-utilities\") pod \"community-operators-gsjk8\" (UID: \"3e740940-700c-48d4-87ae-f3df5493444b\") " pod="openshift-marketplace/community-operators-gsjk8"
Feb 16 17:25:37 crc kubenswrapper[4794]: I0216 17:25:37.354898 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e740940-700c-48d4-87ae-f3df5493444b-utilities\") pod \"community-operators-gsjk8\" (UID: \"3e740940-700c-48d4-87ae-f3df5493444b\") " pod="openshift-marketplace/community-operators-gsjk8"
Feb 16 17:25:37 crc kubenswrapper[4794]: I0216 17:25:37.355373 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e740940-700c-48d4-87ae-f3df5493444b-catalog-content\") pod \"community-operators-gsjk8\" (UID: \"3e740940-700c-48d4-87ae-f3df5493444b\") " pod="openshift-marketplace/community-operators-gsjk8"
Feb 16 17:25:37 crc kubenswrapper[4794]: I0216 17:25:37.355536 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt8nk\" (UniqueName: \"kubernetes.io/projected/3e740940-700c-48d4-87ae-f3df5493444b-kube-api-access-wt8nk\") pod \"community-operators-gsjk8\" (UID: \"3e740940-700c-48d4-87ae-f3df5493444b\") " pod="openshift-marketplace/community-operators-gsjk8"
Feb 16 17:25:37 crc kubenswrapper[4794]: I0216 17:25:37.356003 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e740940-700c-48d4-87ae-f3df5493444b-utilities\") pod \"community-operators-gsjk8\" (UID: \"3e740940-700c-48d4-87ae-f3df5493444b\") " pod="openshift-marketplace/community-operators-gsjk8"
Feb 16 17:25:37 crc kubenswrapper[4794]: I0216 17:25:37.356189 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e740940-700c-48d4-87ae-f3df5493444b-catalog-content\") pod \"community-operators-gsjk8\" (UID: \"3e740940-700c-48d4-87ae-f3df5493444b\") " pod="openshift-marketplace/community-operators-gsjk8"
Feb 16 17:25:37 crc kubenswrapper[4794]: I0216 17:25:37.376195 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt8nk\" (UniqueName: \"kubernetes.io/projected/3e740940-700c-48d4-87ae-f3df5493444b-kube-api-access-wt8nk\") pod \"community-operators-gsjk8\" (UID: \"3e740940-700c-48d4-87ae-f3df5493444b\") " pod="openshift-marketplace/community-operators-gsjk8"
Feb 16 17:25:37 crc kubenswrapper[4794]: I0216 17:25:37.471938 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gsjk8"
Feb 16 17:25:37 crc kubenswrapper[4794]: I0216 17:25:37.999146 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gsjk8"]
Feb 16 17:25:38 crc kubenswrapper[4794]: W0216 17:25:38.000865 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e740940_700c_48d4_87ae_f3df5493444b.slice/crio-dfd552e6784e11cd6c2ef4b0a37427a680ce5fac76e0be7562369f58ae747d62 WatchSource:0}: Error finding container dfd552e6784e11cd6c2ef4b0a37427a680ce5fac76e0be7562369f58ae747d62: Status 404 returned error can't find the container with id dfd552e6784e11cd6c2ef4b0a37427a680ce5fac76e0be7562369f58ae747d62
Feb 16 17:25:38 crc kubenswrapper[4794]: I0216 17:25:38.006479 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"cd26d451-60ee-4078-a937-5c4969efc977","Type":"ContainerStarted","Data":"17118b55a17414c9f93247e2d602064933b2b05cf001cc5823a5df0744072153"}
Feb 16 17:25:38 crc kubenswrapper[4794]: I0216 17:25:38.039485 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.25904856 podStartE2EDuration="5.039462394s" podCreationTimestamp="2026-02-16 17:25:33 +0000 UTC" firstStartedPulling="2026-02-16 17:25:34.451997471 +0000 UTC m=+1560.400092118" lastFinishedPulling="2026-02-16 17:25:37.232411305 +0000 UTC m=+1563.180505952" observedRunningTime="2026-02-16 17:25:38.029198413 +0000 UTC m=+1563.977293080" watchObservedRunningTime="2026-02-16 17:25:38.039462394 +0000 UTC m=+1563.987557031"
Feb 16 17:25:38 crc kubenswrapper[4794]: I0216 17:25:38.791068 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691"
Feb 16 17:25:38 crc kubenswrapper[4794]: E0216 17:25:38.791617 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:25:39 crc kubenswrapper[4794]: I0216 17:25:39.018771 4794 generic.go:334] "Generic (PLEG): container finished" podID="3e740940-700c-48d4-87ae-f3df5493444b" containerID="991110b4d2bf2c74b4fb22927211c1e49d79cc5baaba1ce796d07136ac93f058" exitCode=0
Feb 16 17:25:39 crc kubenswrapper[4794]: I0216 17:25:39.018998 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsjk8" event={"ID":"3e740940-700c-48d4-87ae-f3df5493444b","Type":"ContainerDied","Data":"991110b4d2bf2c74b4fb22927211c1e49d79cc5baaba1ce796d07136ac93f058"}
Feb 16 17:25:39 crc kubenswrapper[4794]: I0216 17:25:39.019044 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsjk8" event={"ID":"3e740940-700c-48d4-87ae-f3df5493444b","Type":"ContainerStarted","Data":"dfd552e6784e11cd6c2ef4b0a37427a680ce5fac76e0be7562369f58ae747d62"}
Feb 16 17:25:41 crc kubenswrapper[4794]: I0216 17:25:41.053662 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsjk8" event={"ID":"3e740940-700c-48d4-87ae-f3df5493444b","Type":"ContainerStarted","Data":"3f3e0b5e4215ea01ebd077a37a396f4389e464717d496a27af0088a2e77e4480"}
Feb 16 17:25:42 crc kubenswrapper[4794]: I0216 17:25:42.004397 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 16 17:25:42 crc kubenswrapper[4794]: I0216 17:25:42.068154 4794 generic.go:334] "Generic (PLEG): container finished" podID="3e740940-700c-48d4-87ae-f3df5493444b" containerID="3f3e0b5e4215ea01ebd077a37a396f4389e464717d496a27af0088a2e77e4480" exitCode=0
Feb 16 17:25:42 crc kubenswrapper[4794]: I0216 17:25:42.068201 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsjk8" event={"ID":"3e740940-700c-48d4-87ae-f3df5493444b","Type":"ContainerDied","Data":"3f3e0b5e4215ea01ebd077a37a396f4389e464717d496a27af0088a2e77e4480"}
Feb 16 17:25:42 crc kubenswrapper[4794]: I0216 17:25:42.233157 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 16 17:25:42 crc kubenswrapper[4794]: I0216 17:25:42.239230 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 16 17:25:42 crc kubenswrapper[4794]: I0216 17:25:42.240583 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 16 17:25:43 crc kubenswrapper[4794]: I0216 17:25:43.082722 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsjk8" event={"ID":"3e740940-700c-48d4-87ae-f3df5493444b","Type":"ContainerStarted","Data":"1def0254bb21ae2aa7449fd1e0980e10152e51f01b65e241ef1cc7c694f782b5"}
Feb 16 17:25:43 crc kubenswrapper[4794]: I0216 17:25:43.089685 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 16 17:25:43 crc kubenswrapper[4794]: I0216 17:25:43.106324 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gsjk8" podStartSLOduration=2.622499913 podStartE2EDuration="6.106281869s" podCreationTimestamp="2026-02-16 17:25:37 +0000 UTC" firstStartedPulling="2026-02-16 17:25:39.021574212 +0000 UTC m=+1564.969668859" lastFinishedPulling="2026-02-16 17:25:42.505356168 +0000 UTC m=+1568.453450815" observedRunningTime="2026-02-16 17:25:43.103006926 +0000 UTC m=+1569.051101573" watchObservedRunningTime="2026-02-16 17:25:43.106281869 +0000 UTC m=+1569.054376516"
Feb 16 17:25:44 crc kubenswrapper[4794]: I0216 17:25:44.224179 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 16 17:25:44 crc kubenswrapper[4794]: I0216 17:25:44.224924 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 16 17:25:44 crc kubenswrapper[4794]: I0216 17:25:44.239339 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 16 17:25:44 crc kubenswrapper[4794]: I0216 17:25:44.254230 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 16 17:25:45 crc kubenswrapper[4794]: I0216 17:25:45.100340 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 16 17:25:45 crc kubenswrapper[4794]: I0216 17:25:45.110387 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 16 17:25:47 crc kubenswrapper[4794]: I0216 17:25:47.001492 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 17:25:47 crc kubenswrapper[4794]: I0216 17:25:47.003009 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="bad8e694-f919-4a68-b0ce-95c9b55ba56a" containerName="kube-state-metrics" containerID="cri-o://ce842615109c9f94ae5eb12663a3827bf072cf92dc03df8bb5197cf9df325015" gracePeriod=30
Feb 16 17:25:47 crc kubenswrapper[4794]: I0216 17:25:47.111696 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 16 17:25:47 crc kubenswrapper[4794]: I0216 17:25:47.112343 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="0e26909b-581a-4945-adf3-58a96cdf5b85" containerName="mysqld-exporter" containerID="cri-o://58c8cbb23abc1c53e580b5dc9b3856a5d8c80d3681c1dfe04ee4dfc70ad59332" gracePeriod=30
Feb 16 17:25:47 crc kubenswrapper[4794]: I0216 17:25:47.134345 4794 generic.go:334] "Generic (PLEG): container finished" podID="bad8e694-f919-4a68-b0ce-95c9b55ba56a" containerID="ce842615109c9f94ae5eb12663a3827bf072cf92dc03df8bb5197cf9df325015" exitCode=2
Feb 16 17:25:47 crc kubenswrapper[4794]: I0216 17:25:47.135280 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bad8e694-f919-4a68-b0ce-95c9b55ba56a","Type":"ContainerDied","Data":"ce842615109c9f94ae5eb12663a3827bf072cf92dc03df8bb5197cf9df325015"}
Feb 16 17:25:47 crc kubenswrapper[4794]: I0216 17:25:47.473671 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gsjk8"
Feb 16 17:25:47 crc kubenswrapper[4794]: I0216 17:25:47.474041 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gsjk8"
Feb 16 17:25:47 crc kubenswrapper[4794]: I0216 17:25:47.540676 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gsjk8"
Feb 16 17:25:47 crc kubenswrapper[4794]: I0216 17:25:47.822585 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 16 17:25:47 crc kubenswrapper[4794]: I0216 17:25:47.830565 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 16 17:25:47 crc kubenswrapper[4794]: I0216 17:25:47.942458 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jspnz\" (UniqueName: \"kubernetes.io/projected/0e26909b-581a-4945-adf3-58a96cdf5b85-kube-api-access-jspnz\") pod \"0e26909b-581a-4945-adf3-58a96cdf5b85\" (UID: \"0e26909b-581a-4945-adf3-58a96cdf5b85\") "
Feb 16 17:25:47 crc kubenswrapper[4794]: I0216 17:25:47.942608 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e26909b-581a-4945-adf3-58a96cdf5b85-combined-ca-bundle\") pod \"0e26909b-581a-4945-adf3-58a96cdf5b85\" (UID: \"0e26909b-581a-4945-adf3-58a96cdf5b85\") "
Feb 16 17:25:47 crc kubenswrapper[4794]: I0216 17:25:47.942724 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e26909b-581a-4945-adf3-58a96cdf5b85-config-data\") pod \"0e26909b-581a-4945-adf3-58a96cdf5b85\" (UID: \"0e26909b-581a-4945-adf3-58a96cdf5b85\") "
Feb 16 17:25:47 crc kubenswrapper[4794]: I0216 17:25:47.942785 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhff4\" (UniqueName: \"kubernetes.io/projected/bad8e694-f919-4a68-b0ce-95c9b55ba56a-kube-api-access-mhff4\") pod \"bad8e694-f919-4a68-b0ce-95c9b55ba56a\" (UID: \"bad8e694-f919-4a68-b0ce-95c9b55ba56a\") "
Feb 16 17:25:47 crc kubenswrapper[4794]: I0216 17:25:47.950716 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e26909b-581a-4945-adf3-58a96cdf5b85-kube-api-access-jspnz" (OuterVolumeSpecName: "kube-api-access-jspnz") pod "0e26909b-581a-4945-adf3-58a96cdf5b85" (UID: "0e26909b-581a-4945-adf3-58a96cdf5b85"). InnerVolumeSpecName "kube-api-access-jspnz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:25:47 crc kubenswrapper[4794]: I0216 17:25:47.952470 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bad8e694-f919-4a68-b0ce-95c9b55ba56a-kube-api-access-mhff4" (OuterVolumeSpecName: "kube-api-access-mhff4") pod "bad8e694-f919-4a68-b0ce-95c9b55ba56a" (UID: "bad8e694-f919-4a68-b0ce-95c9b55ba56a"). InnerVolumeSpecName "kube-api-access-mhff4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:25:47 crc kubenswrapper[4794]: I0216 17:25:47.985698 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e26909b-581a-4945-adf3-58a96cdf5b85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e26909b-581a-4945-adf3-58a96cdf5b85" (UID: "0e26909b-581a-4945-adf3-58a96cdf5b85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.033869 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e26909b-581a-4945-adf3-58a96cdf5b85-config-data" (OuterVolumeSpecName: "config-data") pod "0e26909b-581a-4945-adf3-58a96cdf5b85" (UID: "0e26909b-581a-4945-adf3-58a96cdf5b85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.044988 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhff4\" (UniqueName: \"kubernetes.io/projected/bad8e694-f919-4a68-b0ce-95c9b55ba56a-kube-api-access-mhff4\") on node \"crc\" DevicePath \"\""
Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.045032 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jspnz\" (UniqueName: \"kubernetes.io/projected/0e26909b-581a-4945-adf3-58a96cdf5b85-kube-api-access-jspnz\") on node \"crc\" DevicePath \"\""
Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.045042 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e26909b-581a-4945-adf3-58a96cdf5b85-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.045051 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e26909b-581a-4945-adf3-58a96cdf5b85-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.154562 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bad8e694-f919-4a68-b0ce-95c9b55ba56a","Type":"ContainerDied","Data":"e481eb4b741da234a1130dc005d8be965495e49be8d2c7cd0d64649b78b7cab1"}
Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.154884 4794 scope.go:117] "RemoveContainer" containerID="ce842615109c9f94ae5eb12663a3827bf072cf92dc03df8bb5197cf9df325015"
Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.155071 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.158182 4794 generic.go:334] "Generic (PLEG): container finished" podID="0e26909b-581a-4945-adf3-58a96cdf5b85" containerID="58c8cbb23abc1c53e580b5dc9b3856a5d8c80d3681c1dfe04ee4dfc70ad59332" exitCode=2 Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.158381 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"0e26909b-581a-4945-adf3-58a96cdf5b85","Type":"ContainerDied","Data":"58c8cbb23abc1c53e580b5dc9b3856a5d8c80d3681c1dfe04ee4dfc70ad59332"} Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.158428 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"0e26909b-581a-4945-adf3-58a96cdf5b85","Type":"ContainerDied","Data":"92d29819be5843285c04b2fed813cfce457e88f8f791888294353a438790c96b"} Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.158505 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.217086 4794 scope.go:117] "RemoveContainer" containerID="58c8cbb23abc1c53e580b5dc9b3856a5d8c80d3681c1dfe04ee4dfc70ad59332" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.229447 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.246067 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.255500 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gsjk8" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.258567 4794 scope.go:117] "RemoveContainer" containerID="58c8cbb23abc1c53e580b5dc9b3856a5d8c80d3681c1dfe04ee4dfc70ad59332" Feb 16 17:25:48 crc kubenswrapper[4794]: E0216 17:25:48.259974 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58c8cbb23abc1c53e580b5dc9b3856a5d8c80d3681c1dfe04ee4dfc70ad59332\": container with ID starting with 58c8cbb23abc1c53e580b5dc9b3856a5d8c80d3681c1dfe04ee4dfc70ad59332 not found: ID does not exist" containerID="58c8cbb23abc1c53e580b5dc9b3856a5d8c80d3681c1dfe04ee4dfc70ad59332" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.260140 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58c8cbb23abc1c53e580b5dc9b3856a5d8c80d3681c1dfe04ee4dfc70ad59332"} err="failed to get container status \"58c8cbb23abc1c53e580b5dc9b3856a5d8c80d3681c1dfe04ee4dfc70ad59332\": rpc error: code = NotFound desc = could not find container \"58c8cbb23abc1c53e580b5dc9b3856a5d8c80d3681c1dfe04ee4dfc70ad59332\": container with ID starting with 58c8cbb23abc1c53e580b5dc9b3856a5d8c80d3681c1dfe04ee4dfc70ad59332 not found: ID does not exist" Feb 16 17:25:48 crc 
kubenswrapper[4794]: I0216 17:25:48.282418 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.318772 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.351840 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 17:25:48 crc kubenswrapper[4794]: E0216 17:25:48.352409 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e26909b-581a-4945-adf3-58a96cdf5b85" containerName="mysqld-exporter" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.352431 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e26909b-581a-4945-adf3-58a96cdf5b85" containerName="mysqld-exporter" Feb 16 17:25:48 crc kubenswrapper[4794]: E0216 17:25:48.352466 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad8e694-f919-4a68-b0ce-95c9b55ba56a" containerName="kube-state-metrics" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.352473 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad8e694-f919-4a68-b0ce-95c9b55ba56a" containerName="kube-state-metrics" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.352727 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="bad8e694-f919-4a68-b0ce-95c9b55ba56a" containerName="kube-state-metrics" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.352769 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e26909b-581a-4945-adf3-58a96cdf5b85" containerName="mysqld-exporter" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.353606 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.356367 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.356598 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.382626 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.384553 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.386656 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.387816 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.405023 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.419397 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.441871 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gsjk8"] Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.461402 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/14a7777c-3957-4591-959c-746e1557c309-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"14a7777c-3957-4591-959c-746e1557c309\") " pod="openstack/mysqld-exporter-0" Feb 16 17:25:48 crc 
kubenswrapper[4794]: I0216 17:25:48.461458 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czfcd\" (UniqueName: \"kubernetes.io/projected/4100ccdc-4397-45ed-8c44-e877abeb689c-kube-api-access-czfcd\") pod \"kube-state-metrics-0\" (UID: \"4100ccdc-4397-45ed-8c44-e877abeb689c\") " pod="openstack/kube-state-metrics-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.461484 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4100ccdc-4397-45ed-8c44-e877abeb689c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4100ccdc-4397-45ed-8c44-e877abeb689c\") " pod="openstack/kube-state-metrics-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.461505 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg475\" (UniqueName: \"kubernetes.io/projected/14a7777c-3957-4591-959c-746e1557c309-kube-api-access-fg475\") pod \"mysqld-exporter-0\" (UID: \"14a7777c-3957-4591-959c-746e1557c309\") " pod="openstack/mysqld-exporter-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.461556 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14a7777c-3957-4591-959c-746e1557c309-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"14a7777c-3957-4591-959c-746e1557c309\") " pod="openstack/mysqld-exporter-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.462042 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4100ccdc-4397-45ed-8c44-e877abeb689c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4100ccdc-4397-45ed-8c44-e877abeb689c\") " pod="openstack/kube-state-metrics-0" Feb 16 
17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.462411 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14a7777c-3957-4591-959c-746e1557c309-config-data\") pod \"mysqld-exporter-0\" (UID: \"14a7777c-3957-4591-959c-746e1557c309\") " pod="openstack/mysqld-exporter-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.462534 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4100ccdc-4397-45ed-8c44-e877abeb689c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4100ccdc-4397-45ed-8c44-e877abeb689c\") " pod="openstack/kube-state-metrics-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.565076 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14a7777c-3957-4591-959c-746e1557c309-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"14a7777c-3957-4591-959c-746e1557c309\") " pod="openstack/mysqld-exporter-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.565175 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4100ccdc-4397-45ed-8c44-e877abeb689c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4100ccdc-4397-45ed-8c44-e877abeb689c\") " pod="openstack/kube-state-metrics-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.565288 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14a7777c-3957-4591-959c-746e1557c309-config-data\") pod \"mysqld-exporter-0\" (UID: \"14a7777c-3957-4591-959c-746e1557c309\") " pod="openstack/mysqld-exporter-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.565344 4794 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4100ccdc-4397-45ed-8c44-e877abeb689c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4100ccdc-4397-45ed-8c44-e877abeb689c\") " pod="openstack/kube-state-metrics-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.565431 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/14a7777c-3957-4591-959c-746e1557c309-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"14a7777c-3957-4591-959c-746e1557c309\") " pod="openstack/mysqld-exporter-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.565457 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfcd\" (UniqueName: \"kubernetes.io/projected/4100ccdc-4397-45ed-8c44-e877abeb689c-kube-api-access-czfcd\") pod \"kube-state-metrics-0\" (UID: \"4100ccdc-4397-45ed-8c44-e877abeb689c\") " pod="openstack/kube-state-metrics-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.565475 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4100ccdc-4397-45ed-8c44-e877abeb689c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4100ccdc-4397-45ed-8c44-e877abeb689c\") " pod="openstack/kube-state-metrics-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.565518 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg475\" (UniqueName: \"kubernetes.io/projected/14a7777c-3957-4591-959c-746e1557c309-kube-api-access-fg475\") pod \"mysqld-exporter-0\" (UID: \"14a7777c-3957-4591-959c-746e1557c309\") " pod="openstack/mysqld-exporter-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.571171 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4100ccdc-4397-45ed-8c44-e877abeb689c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4100ccdc-4397-45ed-8c44-e877abeb689c\") " pod="openstack/kube-state-metrics-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.572657 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/14a7777c-3957-4591-959c-746e1557c309-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"14a7777c-3957-4591-959c-746e1557c309\") " pod="openstack/mysqld-exporter-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.575906 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14a7777c-3957-4591-959c-746e1557c309-config-data\") pod \"mysqld-exporter-0\" (UID: \"14a7777c-3957-4591-959c-746e1557c309\") " pod="openstack/mysqld-exporter-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.575954 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4100ccdc-4397-45ed-8c44-e877abeb689c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4100ccdc-4397-45ed-8c44-e877abeb689c\") " pod="openstack/kube-state-metrics-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.576090 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4100ccdc-4397-45ed-8c44-e877abeb689c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4100ccdc-4397-45ed-8c44-e877abeb689c\") " pod="openstack/kube-state-metrics-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.582876 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14a7777c-3957-4591-959c-746e1557c309-combined-ca-bundle\") 
pod \"mysqld-exporter-0\" (UID: \"14a7777c-3957-4591-959c-746e1557c309\") " pod="openstack/mysqld-exporter-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.588871 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czfcd\" (UniqueName: \"kubernetes.io/projected/4100ccdc-4397-45ed-8c44-e877abeb689c-kube-api-access-czfcd\") pod \"kube-state-metrics-0\" (UID: \"4100ccdc-4397-45ed-8c44-e877abeb689c\") " pod="openstack/kube-state-metrics-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.593444 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg475\" (UniqueName: \"kubernetes.io/projected/14a7777c-3957-4591-959c-746e1557c309-kube-api-access-fg475\") pod \"mysqld-exporter-0\" (UID: \"14a7777c-3957-4591-959c-746e1557c309\") " pod="openstack/mysqld-exporter-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.678335 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.706894 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.823337 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e26909b-581a-4945-adf3-58a96cdf5b85" path="/var/lib/kubelet/pods/0e26909b-581a-4945-adf3-58a96cdf5b85/volumes" Feb 16 17:25:48 crc kubenswrapper[4794]: I0216 17:25:48.823853 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bad8e694-f919-4a68-b0ce-95c9b55ba56a" path="/var/lib/kubelet/pods/bad8e694-f919-4a68-b0ce-95c9b55ba56a/volumes" Feb 16 17:25:49 crc kubenswrapper[4794]: I0216 17:25:49.306771 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 17:25:49 crc kubenswrapper[4794]: W0216 17:25:49.319498 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4100ccdc_4397_45ed_8c44_e877abeb689c.slice/crio-9f420906041965e737810718d50c8314b4c1b5875c3fa3e5babfc79b2711183f WatchSource:0}: Error finding container 9f420906041965e737810718d50c8314b4c1b5875c3fa3e5babfc79b2711183f: Status 404 returned error can't find the container with id 9f420906041965e737810718d50c8314b4c1b5875c3fa3e5babfc79b2711183f Feb 16 17:25:49 crc kubenswrapper[4794]: W0216 17:25:49.390426 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14a7777c_3957_4591_959c_746e1557c309.slice/crio-a7b09fc866a6575e3d62c9dbe6a35a247364b3de48b0777dfc13d67b5dc4f284 WatchSource:0}: Error finding container a7b09fc866a6575e3d62c9dbe6a35a247364b3de48b0777dfc13d67b5dc4f284: Status 404 returned error can't find the container with id a7b09fc866a6575e3d62c9dbe6a35a247364b3de48b0777dfc13d67b5dc4f284 Feb 16 17:25:49 crc kubenswrapper[4794]: I0216 17:25:49.391853 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 17:25:49 crc kubenswrapper[4794]: I0216 
17:25:49.589009 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:25:49 crc kubenswrapper[4794]: I0216 17:25:49.589370 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bfb4ae6-8e5f-4047-b00a-496e2cb97275" containerName="ceilometer-central-agent" containerID="cri-o://c667ef05c2615ab3428bdd947fa1adc22ade3d6fd2d037324420ae561f3a6b97" gracePeriod=30 Feb 16 17:25:49 crc kubenswrapper[4794]: I0216 17:25:49.589496 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bfb4ae6-8e5f-4047-b00a-496e2cb97275" containerName="proxy-httpd" containerID="cri-o://58db2af9a34cbbf55cf99c07c1c587acda540c331431ba55038aec58f776599e" gracePeriod=30 Feb 16 17:25:49 crc kubenswrapper[4794]: I0216 17:25:49.589559 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bfb4ae6-8e5f-4047-b00a-496e2cb97275" containerName="sg-core" containerID="cri-o://a7e83c8823b194a5c6336b5d6816f58c0ed2e0574e6e09cfdfaf0d37fa1d9376" gracePeriod=30 Feb 16 17:25:49 crc kubenswrapper[4794]: I0216 17:25:49.589604 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bfb4ae6-8e5f-4047-b00a-496e2cb97275" containerName="ceilometer-notification-agent" containerID="cri-o://78fbbc0f24acd8ba660d34fde5061312a915118c6231539c34a4e0219d165fd0" gracePeriod=30 Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.195905 4794 generic.go:334] "Generic (PLEG): container finished" podID="4bfb4ae6-8e5f-4047-b00a-496e2cb97275" containerID="58db2af9a34cbbf55cf99c07c1c587acda540c331431ba55038aec58f776599e" exitCode=0 Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.196259 4794 generic.go:334] "Generic (PLEG): container finished" podID="4bfb4ae6-8e5f-4047-b00a-496e2cb97275" 
containerID="a7e83c8823b194a5c6336b5d6816f58c0ed2e0574e6e09cfdfaf0d37fa1d9376" exitCode=2 Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.196273 4794 generic.go:334] "Generic (PLEG): container finished" podID="4bfb4ae6-8e5f-4047-b00a-496e2cb97275" containerID="c667ef05c2615ab3428bdd947fa1adc22ade3d6fd2d037324420ae561f3a6b97" exitCode=0 Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.195981 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bfb4ae6-8e5f-4047-b00a-496e2cb97275","Type":"ContainerDied","Data":"58db2af9a34cbbf55cf99c07c1c587acda540c331431ba55038aec58f776599e"} Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.196401 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bfb4ae6-8e5f-4047-b00a-496e2cb97275","Type":"ContainerDied","Data":"a7e83c8823b194a5c6336b5d6816f58c0ed2e0574e6e09cfdfaf0d37fa1d9376"} Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.196421 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bfb4ae6-8e5f-4047-b00a-496e2cb97275","Type":"ContainerDied","Data":"c667ef05c2615ab3428bdd947fa1adc22ade3d6fd2d037324420ae561f3a6b97"} Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.200530 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4100ccdc-4397-45ed-8c44-e877abeb689c","Type":"ContainerStarted","Data":"2c51c1e34a0c60ef557a2b43aaf30d40d1aac61ba1b4c3ec8c8021354224ceff"} Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.200589 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.200606 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"4100ccdc-4397-45ed-8c44-e877abeb689c","Type":"ContainerStarted","Data":"9f420906041965e737810718d50c8314b4c1b5875c3fa3e5babfc79b2711183f"} Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.207328 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"14a7777c-3957-4591-959c-746e1557c309","Type":"ContainerStarted","Data":"a7b09fc866a6575e3d62c9dbe6a35a247364b3de48b0777dfc13d67b5dc4f284"} Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.207452 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gsjk8" podUID="3e740940-700c-48d4-87ae-f3df5493444b" containerName="registry-server" containerID="cri-o://1def0254bb21ae2aa7449fd1e0980e10152e51f01b65e241ef1cc7c694f782b5" gracePeriod=2 Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.229965 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.791870995 podStartE2EDuration="2.229946173s" podCreationTimestamp="2026-02-16 17:25:48 +0000 UTC" firstStartedPulling="2026-02-16 17:25:49.321481651 +0000 UTC m=+1575.269576298" lastFinishedPulling="2026-02-16 17:25:49.759556829 +0000 UTC m=+1575.707651476" observedRunningTime="2026-02-16 17:25:50.228219604 +0000 UTC m=+1576.176314251" watchObservedRunningTime="2026-02-16 17:25:50.229946173 +0000 UTC m=+1576.178040820" Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.270149 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=1.747061816 podStartE2EDuration="2.270128601s" podCreationTimestamp="2026-02-16 17:25:48 +0000 UTC" firstStartedPulling="2026-02-16 17:25:49.392901914 +0000 UTC m=+1575.340996561" lastFinishedPulling="2026-02-16 17:25:49.915968699 +0000 UTC m=+1575.864063346" observedRunningTime="2026-02-16 17:25:50.243828116 +0000 UTC m=+1576.191922763" 
watchObservedRunningTime="2026-02-16 17:25:50.270128601 +0000 UTC m=+1576.218223248" Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.785420 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gsjk8" Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.841408 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e740940-700c-48d4-87ae-f3df5493444b-utilities\") pod \"3e740940-700c-48d4-87ae-f3df5493444b\" (UID: \"3e740940-700c-48d4-87ae-f3df5493444b\") " Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.841692 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt8nk\" (UniqueName: \"kubernetes.io/projected/3e740940-700c-48d4-87ae-f3df5493444b-kube-api-access-wt8nk\") pod \"3e740940-700c-48d4-87ae-f3df5493444b\" (UID: \"3e740940-700c-48d4-87ae-f3df5493444b\") " Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.841948 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e740940-700c-48d4-87ae-f3df5493444b-catalog-content\") pod \"3e740940-700c-48d4-87ae-f3df5493444b\" (UID: \"3e740940-700c-48d4-87ae-f3df5493444b\") " Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.842326 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e740940-700c-48d4-87ae-f3df5493444b-utilities" (OuterVolumeSpecName: "utilities") pod "3e740940-700c-48d4-87ae-f3df5493444b" (UID: "3e740940-700c-48d4-87ae-f3df5493444b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.843335 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e740940-700c-48d4-87ae-f3df5493444b-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.847572 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e740940-700c-48d4-87ae-f3df5493444b-kube-api-access-wt8nk" (OuterVolumeSpecName: "kube-api-access-wt8nk") pod "3e740940-700c-48d4-87ae-f3df5493444b" (UID: "3e740940-700c-48d4-87ae-f3df5493444b"). InnerVolumeSpecName "kube-api-access-wt8nk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.902463 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e740940-700c-48d4-87ae-f3df5493444b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e740940-700c-48d4-87ae-f3df5493444b" (UID: "3e740940-700c-48d4-87ae-f3df5493444b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.945924 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt8nk\" (UniqueName: \"kubernetes.io/projected/3e740940-700c-48d4-87ae-f3df5493444b-kube-api-access-wt8nk\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:50 crc kubenswrapper[4794]: I0216 17:25:50.946216 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e740940-700c-48d4-87ae-f3df5493444b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:51 crc kubenswrapper[4794]: I0216 17:25:51.225099 4794 generic.go:334] "Generic (PLEG): container finished" podID="3e740940-700c-48d4-87ae-f3df5493444b" containerID="1def0254bb21ae2aa7449fd1e0980e10152e51f01b65e241ef1cc7c694f782b5" exitCode=0 Feb 16 17:25:51 crc kubenswrapper[4794]: I0216 17:25:51.225357 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsjk8" event={"ID":"3e740940-700c-48d4-87ae-f3df5493444b","Type":"ContainerDied","Data":"1def0254bb21ae2aa7449fd1e0980e10152e51f01b65e241ef1cc7c694f782b5"} Feb 16 17:25:51 crc kubenswrapper[4794]: I0216 17:25:51.225396 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gsjk8" event={"ID":"3e740940-700c-48d4-87ae-f3df5493444b","Type":"ContainerDied","Data":"dfd552e6784e11cd6c2ef4b0a37427a680ce5fac76e0be7562369f58ae747d62"} Feb 16 17:25:51 crc kubenswrapper[4794]: I0216 17:25:51.225414 4794 scope.go:117] "RemoveContainer" containerID="1def0254bb21ae2aa7449fd1e0980e10152e51f01b65e241ef1cc7c694f782b5" Feb 16 17:25:51 crc kubenswrapper[4794]: I0216 17:25:51.225322 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gsjk8" Feb 16 17:25:51 crc kubenswrapper[4794]: I0216 17:25:51.230845 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"14a7777c-3957-4591-959c-746e1557c309","Type":"ContainerStarted","Data":"2679f45be1b4460db96abbe13a107b0b5264a10dee39648a4c3ba8cfc5d45a6a"} Feb 16 17:25:51 crc kubenswrapper[4794]: I0216 17:25:51.269561 4794 scope.go:117] "RemoveContainer" containerID="3f3e0b5e4215ea01ebd077a37a396f4389e464717d496a27af0088a2e77e4480" Feb 16 17:25:51 crc kubenswrapper[4794]: I0216 17:25:51.280336 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gsjk8"] Feb 16 17:25:51 crc kubenswrapper[4794]: I0216 17:25:51.296357 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gsjk8"] Feb 16 17:25:51 crc kubenswrapper[4794]: I0216 17:25:51.304233 4794 scope.go:117] "RemoveContainer" containerID="991110b4d2bf2c74b4fb22927211c1e49d79cc5baaba1ce796d07136ac93f058" Feb 16 17:25:51 crc kubenswrapper[4794]: I0216 17:25:51.326723 4794 scope.go:117] "RemoveContainer" containerID="1def0254bb21ae2aa7449fd1e0980e10152e51f01b65e241ef1cc7c694f782b5" Feb 16 17:25:51 crc kubenswrapper[4794]: E0216 17:25:51.327356 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1def0254bb21ae2aa7449fd1e0980e10152e51f01b65e241ef1cc7c694f782b5\": container with ID starting with 1def0254bb21ae2aa7449fd1e0980e10152e51f01b65e241ef1cc7c694f782b5 not found: ID does not exist" containerID="1def0254bb21ae2aa7449fd1e0980e10152e51f01b65e241ef1cc7c694f782b5" Feb 16 17:25:51 crc kubenswrapper[4794]: I0216 17:25:51.327401 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1def0254bb21ae2aa7449fd1e0980e10152e51f01b65e241ef1cc7c694f782b5"} err="failed to get container status 
\"1def0254bb21ae2aa7449fd1e0980e10152e51f01b65e241ef1cc7c694f782b5\": rpc error: code = NotFound desc = could not find container \"1def0254bb21ae2aa7449fd1e0980e10152e51f01b65e241ef1cc7c694f782b5\": container with ID starting with 1def0254bb21ae2aa7449fd1e0980e10152e51f01b65e241ef1cc7c694f782b5 not found: ID does not exist" Feb 16 17:25:51 crc kubenswrapper[4794]: I0216 17:25:51.327428 4794 scope.go:117] "RemoveContainer" containerID="3f3e0b5e4215ea01ebd077a37a396f4389e464717d496a27af0088a2e77e4480" Feb 16 17:25:51 crc kubenswrapper[4794]: E0216 17:25:51.327702 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f3e0b5e4215ea01ebd077a37a396f4389e464717d496a27af0088a2e77e4480\": container with ID starting with 3f3e0b5e4215ea01ebd077a37a396f4389e464717d496a27af0088a2e77e4480 not found: ID does not exist" containerID="3f3e0b5e4215ea01ebd077a37a396f4389e464717d496a27af0088a2e77e4480" Feb 16 17:25:51 crc kubenswrapper[4794]: I0216 17:25:51.327736 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f3e0b5e4215ea01ebd077a37a396f4389e464717d496a27af0088a2e77e4480"} err="failed to get container status \"3f3e0b5e4215ea01ebd077a37a396f4389e464717d496a27af0088a2e77e4480\": rpc error: code = NotFound desc = could not find container \"3f3e0b5e4215ea01ebd077a37a396f4389e464717d496a27af0088a2e77e4480\": container with ID starting with 3f3e0b5e4215ea01ebd077a37a396f4389e464717d496a27af0088a2e77e4480 not found: ID does not exist" Feb 16 17:25:51 crc kubenswrapper[4794]: I0216 17:25:51.327756 4794 scope.go:117] "RemoveContainer" containerID="991110b4d2bf2c74b4fb22927211c1e49d79cc5baaba1ce796d07136ac93f058" Feb 16 17:25:51 crc kubenswrapper[4794]: E0216 17:25:51.327969 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"991110b4d2bf2c74b4fb22927211c1e49d79cc5baaba1ce796d07136ac93f058\": container with ID starting with 991110b4d2bf2c74b4fb22927211c1e49d79cc5baaba1ce796d07136ac93f058 not found: ID does not exist" containerID="991110b4d2bf2c74b4fb22927211c1e49d79cc5baaba1ce796d07136ac93f058" Feb 16 17:25:51 crc kubenswrapper[4794]: I0216 17:25:51.327993 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"991110b4d2bf2c74b4fb22927211c1e49d79cc5baaba1ce796d07136ac93f058"} err="failed to get container status \"991110b4d2bf2c74b4fb22927211c1e49d79cc5baaba1ce796d07136ac93f058\": rpc error: code = NotFound desc = could not find container \"991110b4d2bf2c74b4fb22927211c1e49d79cc5baaba1ce796d07136ac93f058\": container with ID starting with 991110b4d2bf2c74b4fb22927211c1e49d79cc5baaba1ce796d07136ac93f058 not found: ID does not exist" Feb 16 17:25:51 crc kubenswrapper[4794]: I0216 17:25:51.792125 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691" Feb 16 17:25:51 crc kubenswrapper[4794]: E0216 17:25:51.792511 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 17:25:52.529709 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="bad8e694-f919-4a68-b0ce-95c9b55ba56a" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.136:8081/readyz\": dial tcp 10.217.0.136:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 
17:25:52.712296 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 17:25:52.794718 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g5lv\" (UniqueName: \"kubernetes.io/projected/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-kube-api-access-8g5lv\") pod \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 17:25:52.794792 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-sg-core-conf-yaml\") pod \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 17:25:52.794844 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-run-httpd\") pod \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 17:25:52.795021 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-scripts\") pod \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 17:25:52.795093 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-log-httpd\") pod \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 17:25:52.795138 4794 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-combined-ca-bundle\") pod \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 17:25:52.795249 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-config-data\") pod \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\" (UID: \"4bfb4ae6-8e5f-4047-b00a-496e2cb97275\") " Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 17:25:52.795984 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4bfb4ae6-8e5f-4047-b00a-496e2cb97275" (UID: "4bfb4ae6-8e5f-4047-b00a-496e2cb97275"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 17:25:52.796263 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4bfb4ae6-8e5f-4047-b00a-496e2cb97275" (UID: "4bfb4ae6-8e5f-4047-b00a-496e2cb97275"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 17:25:52.802054 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-scripts" (OuterVolumeSpecName: "scripts") pod "4bfb4ae6-8e5f-4047-b00a-496e2cb97275" (UID: "4bfb4ae6-8e5f-4047-b00a-496e2cb97275"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 17:25:52.803561 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-kube-api-access-8g5lv" (OuterVolumeSpecName: "kube-api-access-8g5lv") pod "4bfb4ae6-8e5f-4047-b00a-496e2cb97275" (UID: "4bfb4ae6-8e5f-4047-b00a-496e2cb97275"). InnerVolumeSpecName "kube-api-access-8g5lv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 17:25:52.826164 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e740940-700c-48d4-87ae-f3df5493444b" path="/var/lib/kubelet/pods/3e740940-700c-48d4-87ae-f3df5493444b/volumes" Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 17:25:52.848423 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4bfb4ae6-8e5f-4047-b00a-496e2cb97275" (UID: "4bfb4ae6-8e5f-4047-b00a-496e2cb97275"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 17:25:52.900878 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g5lv\" (UniqueName: \"kubernetes.io/projected/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-kube-api-access-8g5lv\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 17:25:52.900918 4794 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 17:25:52.900932 4794 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 17:25:52.900946 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 17:25:52.900957 4794 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 17:25:52.918047 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bfb4ae6-8e5f-4047-b00a-496e2cb97275" (UID: "4bfb4ae6-8e5f-4047-b00a-496e2cb97275"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:25:52 crc kubenswrapper[4794]: I0216 17:25:52.932831 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-config-data" (OuterVolumeSpecName: "config-data") pod "4bfb4ae6-8e5f-4047-b00a-496e2cb97275" (UID: "4bfb4ae6-8e5f-4047-b00a-496e2cb97275"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.003487 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.003518 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bfb4ae6-8e5f-4047-b00a-496e2cb97275-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.263661 4794 generic.go:334] "Generic (PLEG): container finished" podID="4bfb4ae6-8e5f-4047-b00a-496e2cb97275" containerID="78fbbc0f24acd8ba660d34fde5061312a915118c6231539c34a4e0219d165fd0" exitCode=0 Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.263705 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bfb4ae6-8e5f-4047-b00a-496e2cb97275","Type":"ContainerDied","Data":"78fbbc0f24acd8ba660d34fde5061312a915118c6231539c34a4e0219d165fd0"} Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.263734 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bfb4ae6-8e5f-4047-b00a-496e2cb97275","Type":"ContainerDied","Data":"45859627c7c5270d7a5eb8a2a070e4fdee20efffe9de79359f036c16d63c38c2"} Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.263754 4794 scope.go:117] "RemoveContainer" 
containerID="58db2af9a34cbbf55cf99c07c1c587acda540c331431ba55038aec58f776599e" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.263877 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.319401 4794 scope.go:117] "RemoveContainer" containerID="a7e83c8823b194a5c6336b5d6816f58c0ed2e0574e6e09cfdfaf0d37fa1d9376" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.319582 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.343979 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.345657 4794 scope.go:117] "RemoveContainer" containerID="78fbbc0f24acd8ba660d34fde5061312a915118c6231539c34a4e0219d165fd0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.363881 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:25:53 crc kubenswrapper[4794]: E0216 17:25:53.364465 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bfb4ae6-8e5f-4047-b00a-496e2cb97275" containerName="sg-core" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.364490 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bfb4ae6-8e5f-4047-b00a-496e2cb97275" containerName="sg-core" Feb 16 17:25:53 crc kubenswrapper[4794]: E0216 17:25:53.364501 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e740940-700c-48d4-87ae-f3df5493444b" containerName="registry-server" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.364507 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e740940-700c-48d4-87ae-f3df5493444b" containerName="registry-server" Feb 16 17:25:53 crc kubenswrapper[4794]: E0216 17:25:53.364528 4794 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3e740940-700c-48d4-87ae-f3df5493444b" containerName="extract-utilities" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.364534 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e740940-700c-48d4-87ae-f3df5493444b" containerName="extract-utilities" Feb 16 17:25:53 crc kubenswrapper[4794]: E0216 17:25:53.364549 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e740940-700c-48d4-87ae-f3df5493444b" containerName="extract-content" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.364555 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e740940-700c-48d4-87ae-f3df5493444b" containerName="extract-content" Feb 16 17:25:53 crc kubenswrapper[4794]: E0216 17:25:53.364576 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bfb4ae6-8e5f-4047-b00a-496e2cb97275" containerName="proxy-httpd" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.364582 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bfb4ae6-8e5f-4047-b00a-496e2cb97275" containerName="proxy-httpd" Feb 16 17:25:53 crc kubenswrapper[4794]: E0216 17:25:53.364600 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bfb4ae6-8e5f-4047-b00a-496e2cb97275" containerName="ceilometer-notification-agent" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.364608 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bfb4ae6-8e5f-4047-b00a-496e2cb97275" containerName="ceilometer-notification-agent" Feb 16 17:25:53 crc kubenswrapper[4794]: E0216 17:25:53.364626 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bfb4ae6-8e5f-4047-b00a-496e2cb97275" containerName="ceilometer-central-agent" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.364632 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bfb4ae6-8e5f-4047-b00a-496e2cb97275" containerName="ceilometer-central-agent" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.364846 4794 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4bfb4ae6-8e5f-4047-b00a-496e2cb97275" containerName="proxy-httpd" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.364860 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e740940-700c-48d4-87ae-f3df5493444b" containerName="registry-server" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.364878 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bfb4ae6-8e5f-4047-b00a-496e2cb97275" containerName="ceilometer-central-agent" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.364896 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bfb4ae6-8e5f-4047-b00a-496e2cb97275" containerName="ceilometer-notification-agent" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.364906 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bfb4ae6-8e5f-4047-b00a-496e2cb97275" containerName="sg-core" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.367015 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.369699 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.370027 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.370151 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.383586 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.413460 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-scripts\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.413515 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.413588 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twl5x\" (UniqueName: \"kubernetes.io/projected/15ab3708-479d-4ec2-9125-6c816bf6084f-kube-api-access-twl5x\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.413611 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3708-479d-4ec2-9125-6c816bf6084f-run-httpd\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.413646 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.413680 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.413793 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3708-479d-4ec2-9125-6c816bf6084f-log-httpd\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.413812 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-config-data\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.419377 4794 scope.go:117] "RemoveContainer" containerID="c667ef05c2615ab3428bdd947fa1adc22ade3d6fd2d037324420ae561f3a6b97" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.439875 4794 scope.go:117] "RemoveContainer" 
containerID="58db2af9a34cbbf55cf99c07c1c587acda540c331431ba55038aec58f776599e" Feb 16 17:25:53 crc kubenswrapper[4794]: E0216 17:25:53.440407 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58db2af9a34cbbf55cf99c07c1c587acda540c331431ba55038aec58f776599e\": container with ID starting with 58db2af9a34cbbf55cf99c07c1c587acda540c331431ba55038aec58f776599e not found: ID does not exist" containerID="58db2af9a34cbbf55cf99c07c1c587acda540c331431ba55038aec58f776599e" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.440443 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58db2af9a34cbbf55cf99c07c1c587acda540c331431ba55038aec58f776599e"} err="failed to get container status \"58db2af9a34cbbf55cf99c07c1c587acda540c331431ba55038aec58f776599e\": rpc error: code = NotFound desc = could not find container \"58db2af9a34cbbf55cf99c07c1c587acda540c331431ba55038aec58f776599e\": container with ID starting with 58db2af9a34cbbf55cf99c07c1c587acda540c331431ba55038aec58f776599e not found: ID does not exist" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.440470 4794 scope.go:117] "RemoveContainer" containerID="a7e83c8823b194a5c6336b5d6816f58c0ed2e0574e6e09cfdfaf0d37fa1d9376" Feb 16 17:25:53 crc kubenswrapper[4794]: E0216 17:25:53.440834 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7e83c8823b194a5c6336b5d6816f58c0ed2e0574e6e09cfdfaf0d37fa1d9376\": container with ID starting with a7e83c8823b194a5c6336b5d6816f58c0ed2e0574e6e09cfdfaf0d37fa1d9376 not found: ID does not exist" containerID="a7e83c8823b194a5c6336b5d6816f58c0ed2e0574e6e09cfdfaf0d37fa1d9376" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.440852 4794 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a7e83c8823b194a5c6336b5d6816f58c0ed2e0574e6e09cfdfaf0d37fa1d9376"} err="failed to get container status \"a7e83c8823b194a5c6336b5d6816f58c0ed2e0574e6e09cfdfaf0d37fa1d9376\": rpc error: code = NotFound desc = could not find container \"a7e83c8823b194a5c6336b5d6816f58c0ed2e0574e6e09cfdfaf0d37fa1d9376\": container with ID starting with a7e83c8823b194a5c6336b5d6816f58c0ed2e0574e6e09cfdfaf0d37fa1d9376 not found: ID does not exist" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.440868 4794 scope.go:117] "RemoveContainer" containerID="78fbbc0f24acd8ba660d34fde5061312a915118c6231539c34a4e0219d165fd0" Feb 16 17:25:53 crc kubenswrapper[4794]: E0216 17:25:53.441261 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78fbbc0f24acd8ba660d34fde5061312a915118c6231539c34a4e0219d165fd0\": container with ID starting with 78fbbc0f24acd8ba660d34fde5061312a915118c6231539c34a4e0219d165fd0 not found: ID does not exist" containerID="78fbbc0f24acd8ba660d34fde5061312a915118c6231539c34a4e0219d165fd0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.441280 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78fbbc0f24acd8ba660d34fde5061312a915118c6231539c34a4e0219d165fd0"} err="failed to get container status \"78fbbc0f24acd8ba660d34fde5061312a915118c6231539c34a4e0219d165fd0\": rpc error: code = NotFound desc = could not find container \"78fbbc0f24acd8ba660d34fde5061312a915118c6231539c34a4e0219d165fd0\": container with ID starting with 78fbbc0f24acd8ba660d34fde5061312a915118c6231539c34a4e0219d165fd0 not found: ID does not exist" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.441295 4794 scope.go:117] "RemoveContainer" containerID="c667ef05c2615ab3428bdd947fa1adc22ade3d6fd2d037324420ae561f3a6b97" Feb 16 17:25:53 crc kubenswrapper[4794]: E0216 17:25:53.441661 4794 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c667ef05c2615ab3428bdd947fa1adc22ade3d6fd2d037324420ae561f3a6b97\": container with ID starting with c667ef05c2615ab3428bdd947fa1adc22ade3d6fd2d037324420ae561f3a6b97 not found: ID does not exist" containerID="c667ef05c2615ab3428bdd947fa1adc22ade3d6fd2d037324420ae561f3a6b97" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.441680 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c667ef05c2615ab3428bdd947fa1adc22ade3d6fd2d037324420ae561f3a6b97"} err="failed to get container status \"c667ef05c2615ab3428bdd947fa1adc22ade3d6fd2d037324420ae561f3a6b97\": rpc error: code = NotFound desc = could not find container \"c667ef05c2615ab3428bdd947fa1adc22ade3d6fd2d037324420ae561f3a6b97\": container with ID starting with c667ef05c2615ab3428bdd947fa1adc22ade3d6fd2d037324420ae561f3a6b97 not found: ID does not exist" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.516101 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3708-479d-4ec2-9125-6c816bf6084f-log-httpd\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.516684 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-config-data\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.517213 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-scripts\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc 
kubenswrapper[4794]: I0216 17:25:53.516623 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3708-479d-4ec2-9125-6c816bf6084f-log-httpd\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.517278 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.517628 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twl5x\" (UniqueName: \"kubernetes.io/projected/15ab3708-479d-4ec2-9125-6c816bf6084f-kube-api-access-twl5x\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.517692 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3708-479d-4ec2-9125-6c816bf6084f-run-httpd\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.517804 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.517877 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.518064 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3708-479d-4ec2-9125-6c816bf6084f-run-httpd\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.520734 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-config-data\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.520808 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.521954 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-scripts\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.525139 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.526383 4794 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.534531 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twl5x\" (UniqueName: \"kubernetes.io/projected/15ab3708-479d-4ec2-9125-6c816bf6084f-kube-api-access-twl5x\") pod \"ceilometer-0\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " pod="openstack/ceilometer-0" Feb 16 17:25:53 crc kubenswrapper[4794]: I0216 17:25:53.708992 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:25:54 crc kubenswrapper[4794]: I0216 17:25:54.213288 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:25:54 crc kubenswrapper[4794]: W0216 17:25:54.226828 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15ab3708_479d_4ec2_9125_6c816bf6084f.slice/crio-fbde4a60d9251142a8b6c39834e55507b5be0388d666851890762bb59351a87e WatchSource:0}: Error finding container fbde4a60d9251142a8b6c39834e55507b5be0388d666851890762bb59351a87e: Status 404 returned error can't find the container with id fbde4a60d9251142a8b6c39834e55507b5be0388d666851890762bb59351a87e Feb 16 17:25:54 crc kubenswrapper[4794]: I0216 17:25:54.282126 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3708-479d-4ec2-9125-6c816bf6084f","Type":"ContainerStarted","Data":"fbde4a60d9251142a8b6c39834e55507b5be0388d666851890762bb59351a87e"} Feb 16 17:25:54 crc kubenswrapper[4794]: I0216 17:25:54.817109 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bfb4ae6-8e5f-4047-b00a-496e2cb97275" 
path="/var/lib/kubelet/pods/4bfb4ae6-8e5f-4047-b00a-496e2cb97275/volumes" Feb 16 17:25:55 crc kubenswrapper[4794]: I0216 17:25:55.297934 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3708-479d-4ec2-9125-6c816bf6084f","Type":"ContainerStarted","Data":"e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06"} Feb 16 17:25:56 crc kubenswrapper[4794]: I0216 17:25:56.311133 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3708-479d-4ec2-9125-6c816bf6084f","Type":"ContainerStarted","Data":"0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea"} Feb 16 17:25:56 crc kubenswrapper[4794]: I0216 17:25:56.644523 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-rppbn"] Feb 16 17:25:56 crc kubenswrapper[4794]: I0216 17:25:56.654844 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-rppbn"] Feb 16 17:25:56 crc kubenswrapper[4794]: I0216 17:25:56.732693 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-7gcsf"] Feb 16 17:25:56 crc kubenswrapper[4794]: I0216 17:25:56.734840 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-7gcsf" Feb 16 17:25:56 crc kubenswrapper[4794]: I0216 17:25:56.746276 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-7gcsf"] Feb 16 17:25:56 crc kubenswrapper[4794]: I0216 17:25:56.797456 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c695f880-15cb-45b1-8545-60d8437ec631-combined-ca-bundle\") pod \"heat-db-sync-7gcsf\" (UID: \"c695f880-15cb-45b1-8545-60d8437ec631\") " pod="openstack/heat-db-sync-7gcsf" Feb 16 17:25:56 crc kubenswrapper[4794]: I0216 17:25:56.797542 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h5l2\" (UniqueName: \"kubernetes.io/projected/c695f880-15cb-45b1-8545-60d8437ec631-kube-api-access-2h5l2\") pod \"heat-db-sync-7gcsf\" (UID: \"c695f880-15cb-45b1-8545-60d8437ec631\") " pod="openstack/heat-db-sync-7gcsf" Feb 16 17:25:56 crc kubenswrapper[4794]: I0216 17:25:56.797652 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c695f880-15cb-45b1-8545-60d8437ec631-config-data\") pod \"heat-db-sync-7gcsf\" (UID: \"c695f880-15cb-45b1-8545-60d8437ec631\") " pod="openstack/heat-db-sync-7gcsf" Feb 16 17:25:56 crc kubenswrapper[4794]: I0216 17:25:56.828996 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f3b58ad-6afe-4194-a578-2f4fec69367c" path="/var/lib/kubelet/pods/8f3b58ad-6afe-4194-a578-2f4fec69367c/volumes" Feb 16 17:25:56 crc kubenswrapper[4794]: I0216 17:25:56.899901 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c695f880-15cb-45b1-8545-60d8437ec631-combined-ca-bundle\") pod \"heat-db-sync-7gcsf\" (UID: \"c695f880-15cb-45b1-8545-60d8437ec631\") " 
pod="openstack/heat-db-sync-7gcsf" Feb 16 17:25:56 crc kubenswrapper[4794]: I0216 17:25:56.899983 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h5l2\" (UniqueName: \"kubernetes.io/projected/c695f880-15cb-45b1-8545-60d8437ec631-kube-api-access-2h5l2\") pod \"heat-db-sync-7gcsf\" (UID: \"c695f880-15cb-45b1-8545-60d8437ec631\") " pod="openstack/heat-db-sync-7gcsf" Feb 16 17:25:56 crc kubenswrapper[4794]: I0216 17:25:56.900297 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c695f880-15cb-45b1-8545-60d8437ec631-config-data\") pod \"heat-db-sync-7gcsf\" (UID: \"c695f880-15cb-45b1-8545-60d8437ec631\") " pod="openstack/heat-db-sync-7gcsf" Feb 16 17:25:56 crc kubenswrapper[4794]: I0216 17:25:56.908458 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c695f880-15cb-45b1-8545-60d8437ec631-config-data\") pod \"heat-db-sync-7gcsf\" (UID: \"c695f880-15cb-45b1-8545-60d8437ec631\") " pod="openstack/heat-db-sync-7gcsf" Feb 16 17:25:56 crc kubenswrapper[4794]: I0216 17:25:56.916905 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c695f880-15cb-45b1-8545-60d8437ec631-combined-ca-bundle\") pod \"heat-db-sync-7gcsf\" (UID: \"c695f880-15cb-45b1-8545-60d8437ec631\") " pod="openstack/heat-db-sync-7gcsf" Feb 16 17:25:56 crc kubenswrapper[4794]: I0216 17:25:56.918927 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h5l2\" (UniqueName: \"kubernetes.io/projected/c695f880-15cb-45b1-8545-60d8437ec631-kube-api-access-2h5l2\") pod \"heat-db-sync-7gcsf\" (UID: \"c695f880-15cb-45b1-8545-60d8437ec631\") " pod="openstack/heat-db-sync-7gcsf" Feb 16 17:25:57 crc kubenswrapper[4794]: I0216 17:25:57.101126 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-7gcsf" Feb 16 17:25:57 crc kubenswrapper[4794]: I0216 17:25:57.602736 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-7gcsf"] Feb 16 17:25:57 crc kubenswrapper[4794]: W0216 17:25:57.604201 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc695f880_15cb_45b1_8545_60d8437ec631.slice/crio-811e3d90f921b01d1c6362809d283d62be51eadd1e7cd63321e4e900ae751d74 WatchSource:0}: Error finding container 811e3d90f921b01d1c6362809d283d62be51eadd1e7cd63321e4e900ae751d74: Status 404 returned error can't find the container with id 811e3d90f921b01d1c6362809d283d62be51eadd1e7cd63321e4e900ae751d74 Feb 16 17:25:57 crc kubenswrapper[4794]: E0216 17:25:57.742982 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 17:25:57 crc kubenswrapper[4794]: E0216 17:25:57.743049 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 17:25:57 crc kubenswrapper[4794]: E0216 17:25:57.743185 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2h5l2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7gcsf_openstack(c695f880-15cb-45b1-8545-60d8437ec631): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:25:57 crc kubenswrapper[4794]: E0216 17:25:57.744371 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:25:58 crc kubenswrapper[4794]: I0216 17:25:58.336317 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3708-479d-4ec2-9125-6c816bf6084f","Type":"ContainerStarted","Data":"4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11"} Feb 16 17:25:58 crc kubenswrapper[4794]: I0216 17:25:58.337188 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-7gcsf" event={"ID":"c695f880-15cb-45b1-8545-60d8437ec631","Type":"ContainerStarted","Data":"811e3d90f921b01d1c6362809d283d62be51eadd1e7cd63321e4e900ae751d74"} Feb 16 17:25:58 crc kubenswrapper[4794]: E0216 17:25:58.339330 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:25:58 crc kubenswrapper[4794]: I0216 17:25:58.723241 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 17:25:59 crc kubenswrapper[4794]: E0216 17:25:59.347919 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:25:59 crc kubenswrapper[4794]: I0216 17:25:59.470688 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 17:25:59 crc kubenswrapper[4794]: I0216 17:25:59.921797 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 
17:26:00 crc kubenswrapper[4794]: I0216 17:26:00.359905 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3708-479d-4ec2-9125-6c816bf6084f","Type":"ContainerStarted","Data":"49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be"} Feb 16 17:26:00 crc kubenswrapper[4794]: I0216 17:26:00.360078 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 17:26:00 crc kubenswrapper[4794]: I0216 17:26:00.386743 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.531693112 podStartE2EDuration="7.386721587s" podCreationTimestamp="2026-02-16 17:25:53 +0000 UTC" firstStartedPulling="2026-02-16 17:25:54.230246079 +0000 UTC m=+1580.178340726" lastFinishedPulling="2026-02-16 17:25:59.085274554 +0000 UTC m=+1585.033369201" observedRunningTime="2026-02-16 17:26:00.382603421 +0000 UTC m=+1586.330698068" watchObservedRunningTime="2026-02-16 17:26:00.386721587 +0000 UTC m=+1586.334816234" Feb 16 17:26:00 crc kubenswrapper[4794]: I0216 17:26:00.516526 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 17:26:01 crc kubenswrapper[4794]: I0216 17:26:01.373351 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15ab3708-479d-4ec2-9125-6c816bf6084f" containerName="ceilometer-central-agent" containerID="cri-o://e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06" gracePeriod=30 Feb 16 17:26:01 crc kubenswrapper[4794]: I0216 17:26:01.373448 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15ab3708-479d-4ec2-9125-6c816bf6084f" containerName="proxy-httpd" containerID="cri-o://49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be" gracePeriod=30 Feb 16 17:26:01 crc kubenswrapper[4794]: I0216 17:26:01.373448 4794 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15ab3708-479d-4ec2-9125-6c816bf6084f" containerName="sg-core" containerID="cri-o://4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11" gracePeriod=30 Feb 16 17:26:01 crc kubenswrapper[4794]: I0216 17:26:01.373560 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15ab3708-479d-4ec2-9125-6c816bf6084f" containerName="ceilometer-notification-agent" containerID="cri-o://0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea" gracePeriod=30 Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.283387 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.345439 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3708-479d-4ec2-9125-6c816bf6084f-run-httpd\") pod \"15ab3708-479d-4ec2-9125-6c816bf6084f\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.345532 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-ceilometer-tls-certs\") pod \"15ab3708-479d-4ec2-9125-6c816bf6084f\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.345574 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twl5x\" (UniqueName: \"kubernetes.io/projected/15ab3708-479d-4ec2-9125-6c816bf6084f-kube-api-access-twl5x\") pod \"15ab3708-479d-4ec2-9125-6c816bf6084f\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.345659 4794 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-sg-core-conf-yaml\") pod \"15ab3708-479d-4ec2-9125-6c816bf6084f\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.345715 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-scripts\") pod \"15ab3708-479d-4ec2-9125-6c816bf6084f\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.345734 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-combined-ca-bundle\") pod \"15ab3708-479d-4ec2-9125-6c816bf6084f\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.345861 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3708-479d-4ec2-9125-6c816bf6084f-log-httpd\") pod \"15ab3708-479d-4ec2-9125-6c816bf6084f\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.346166 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-config-data\") pod \"15ab3708-479d-4ec2-9125-6c816bf6084f\" (UID: \"15ab3708-479d-4ec2-9125-6c816bf6084f\") " Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.358380 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15ab3708-479d-4ec2-9125-6c816bf6084f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "15ab3708-479d-4ec2-9125-6c816bf6084f" (UID: "15ab3708-479d-4ec2-9125-6c816bf6084f"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.362169 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15ab3708-479d-4ec2-9125-6c816bf6084f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "15ab3708-479d-4ec2-9125-6c816bf6084f" (UID: "15ab3708-479d-4ec2-9125-6c816bf6084f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.366792 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-scripts" (OuterVolumeSpecName: "scripts") pod "15ab3708-479d-4ec2-9125-6c816bf6084f" (UID: "15ab3708-479d-4ec2-9125-6c816bf6084f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.389739 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15ab3708-479d-4ec2-9125-6c816bf6084f-kube-api-access-twl5x" (OuterVolumeSpecName: "kube-api-access-twl5x") pod "15ab3708-479d-4ec2-9125-6c816bf6084f" (UID: "15ab3708-479d-4ec2-9125-6c816bf6084f"). InnerVolumeSpecName "kube-api-access-twl5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.408892 4794 generic.go:334] "Generic (PLEG): container finished" podID="15ab3708-479d-4ec2-9125-6c816bf6084f" containerID="49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be" exitCode=0 Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.409186 4794 generic.go:334] "Generic (PLEG): container finished" podID="15ab3708-479d-4ec2-9125-6c816bf6084f" containerID="4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11" exitCode=2 Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.409013 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3708-479d-4ec2-9125-6c816bf6084f","Type":"ContainerDied","Data":"49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be"} Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.409348 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3708-479d-4ec2-9125-6c816bf6084f","Type":"ContainerDied","Data":"4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11"} Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.409379 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3708-479d-4ec2-9125-6c816bf6084f","Type":"ContainerDied","Data":"0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea"} Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.409399 4794 scope.go:117] "RemoveContainer" containerID="49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.409053 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.409294 4794 generic.go:334] "Generic (PLEG): container finished" podID="15ab3708-479d-4ec2-9125-6c816bf6084f" containerID="0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea" exitCode=0 Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.409891 4794 generic.go:334] "Generic (PLEG): container finished" podID="15ab3708-479d-4ec2-9125-6c816bf6084f" containerID="e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06" exitCode=0 Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.410001 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3708-479d-4ec2-9125-6c816bf6084f","Type":"ContainerDied","Data":"e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06"} Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.410125 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3708-479d-4ec2-9125-6c816bf6084f","Type":"ContainerDied","Data":"fbde4a60d9251142a8b6c39834e55507b5be0388d666851890762bb59351a87e"} Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.440959 4794 scope.go:117] "RemoveContainer" containerID="4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.444418 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "15ab3708-479d-4ec2-9125-6c816bf6084f" (UID: "15ab3708-479d-4ec2-9125-6c816bf6084f"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.448523 4794 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3708-479d-4ec2-9125-6c816bf6084f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.448548 4794 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3708-479d-4ec2-9125-6c816bf6084f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.448557 4794 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.448568 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twl5x\" (UniqueName: \"kubernetes.io/projected/15ab3708-479d-4ec2-9125-6c816bf6084f-kube-api-access-twl5x\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.448577 4794 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.450594 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "15ab3708-479d-4ec2-9125-6c816bf6084f" (UID: "15ab3708-479d-4ec2-9125-6c816bf6084f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.464371 4794 scope.go:117] "RemoveContainer" containerID="0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.499079 4794 scope.go:117] "RemoveContainer" containerID="e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.527283 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15ab3708-479d-4ec2-9125-6c816bf6084f" (UID: "15ab3708-479d-4ec2-9125-6c816bf6084f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.550684 4794 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.550720 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.583768 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-config-data" (OuterVolumeSpecName: "config-data") pod "15ab3708-479d-4ec2-9125-6c816bf6084f" (UID: "15ab3708-479d-4ec2-9125-6c816bf6084f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.652697 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15ab3708-479d-4ec2-9125-6c816bf6084f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.694926 4794 scope.go:117] "RemoveContainer" containerID="49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be" Feb 16 17:26:02 crc kubenswrapper[4794]: E0216 17:26:02.695481 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be\": container with ID starting with 49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be not found: ID does not exist" containerID="49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.695532 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be"} err="failed to get container status \"49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be\": rpc error: code = NotFound desc = could not find container \"49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be\": container with ID starting with 49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be not found: ID does not exist" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.695555 4794 scope.go:117] "RemoveContainer" containerID="4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11" Feb 16 17:26:02 crc kubenswrapper[4794]: E0216 17:26:02.695929 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11\": container 
with ID starting with 4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11 not found: ID does not exist" containerID="4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.695950 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11"} err="failed to get container status \"4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11\": rpc error: code = NotFound desc = could not find container \"4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11\": container with ID starting with 4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11 not found: ID does not exist" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.695966 4794 scope.go:117] "RemoveContainer" containerID="0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea" Feb 16 17:26:02 crc kubenswrapper[4794]: E0216 17:26:02.697667 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea\": container with ID starting with 0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea not found: ID does not exist" containerID="0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.697749 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea"} err="failed to get container status \"0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea\": rpc error: code = NotFound desc = could not find container \"0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea\": container with ID starting with 0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea not 
found: ID does not exist" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.697800 4794 scope.go:117] "RemoveContainer" containerID="e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06" Feb 16 17:26:02 crc kubenswrapper[4794]: E0216 17:26:02.698259 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06\": container with ID starting with e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06 not found: ID does not exist" containerID="e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.698367 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06"} err="failed to get container status \"e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06\": rpc error: code = NotFound desc = could not find container \"e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06\": container with ID starting with e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06 not found: ID does not exist" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.698393 4794 scope.go:117] "RemoveContainer" containerID="49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.698725 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be"} err="failed to get container status \"49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be\": rpc error: code = NotFound desc = could not find container \"49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be\": container with ID starting with 
49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be not found: ID does not exist" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.698779 4794 scope.go:117] "RemoveContainer" containerID="4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.699809 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11"} err="failed to get container status \"4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11\": rpc error: code = NotFound desc = could not find container \"4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11\": container with ID starting with 4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11 not found: ID does not exist" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.699831 4794 scope.go:117] "RemoveContainer" containerID="0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.700233 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea"} err="failed to get container status \"0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea\": rpc error: code = NotFound desc = could not find container \"0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea\": container with ID starting with 0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea not found: ID does not exist" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.700366 4794 scope.go:117] "RemoveContainer" containerID="e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.700760 4794 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06"} err="failed to get container status \"e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06\": rpc error: code = NotFound desc = could not find container \"e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06\": container with ID starting with e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06 not found: ID does not exist" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.700797 4794 scope.go:117] "RemoveContainer" containerID="49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.701111 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be"} err="failed to get container status \"49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be\": rpc error: code = NotFound desc = could not find container \"49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be\": container with ID starting with 49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be not found: ID does not exist" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.701191 4794 scope.go:117] "RemoveContainer" containerID="4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.701510 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11"} err="failed to get container status \"4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11\": rpc error: code = NotFound desc = could not find container \"4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11\": container with ID starting with 4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11 not found: ID does not 
exist" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.701531 4794 scope.go:117] "RemoveContainer" containerID="0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.701712 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea"} err="failed to get container status \"0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea\": rpc error: code = NotFound desc = could not find container \"0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea\": container with ID starting with 0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea not found: ID does not exist" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.701731 4794 scope.go:117] "RemoveContainer" containerID="e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.701985 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06"} err="failed to get container status \"e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06\": rpc error: code = NotFound desc = could not find container \"e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06\": container with ID starting with e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06 not found: ID does not exist" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.702001 4794 scope.go:117] "RemoveContainer" containerID="49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.702170 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be"} err="failed to get container status 
\"49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be\": rpc error: code = NotFound desc = could not find container \"49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be\": container with ID starting with 49d13a5d07d3e8b9b3f72107b50fbf76ba393fef8a523d07519570f35220f4be not found: ID does not exist" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.702202 4794 scope.go:117] "RemoveContainer" containerID="4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.702518 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11"} err="failed to get container status \"4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11\": rpc error: code = NotFound desc = could not find container \"4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11\": container with ID starting with 4728113f22979c017aee4629603006a992568a942b573234e1ff9d4ebec7aa11 not found: ID does not exist" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.702603 4794 scope.go:117] "RemoveContainer" containerID="0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.702942 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea"} err="failed to get container status \"0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea\": rpc error: code = NotFound desc = could not find container \"0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea\": container with ID starting with 0166c3b6fa21a35e55b87e3765e7ef220945d10949ac08cfb1a0ca476bafb0ea not found: ID does not exist" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.702964 4794 scope.go:117] "RemoveContainer" 
containerID="e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.703233 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06"} err="failed to get container status \"e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06\": rpc error: code = NotFound desc = could not find container \"e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06\": container with ID starting with e5ed689c9e73dd31b2bf1abb59371e3289970ef73d488d04a64f712e8d12fa06 not found: ID does not exist" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.756388 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.776324 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.788546 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:26:02 crc kubenswrapper[4794]: E0216 17:26:02.789196 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ab3708-479d-4ec2-9125-6c816bf6084f" containerName="proxy-httpd" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.789316 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ab3708-479d-4ec2-9125-6c816bf6084f" containerName="proxy-httpd" Feb 16 17:26:02 crc kubenswrapper[4794]: E0216 17:26:02.789413 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ab3708-479d-4ec2-9125-6c816bf6084f" containerName="ceilometer-central-agent" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.789467 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ab3708-479d-4ec2-9125-6c816bf6084f" containerName="ceilometer-central-agent" Feb 16 17:26:02 crc kubenswrapper[4794]: E0216 17:26:02.789525 4794 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ab3708-479d-4ec2-9125-6c816bf6084f" containerName="ceilometer-notification-agent" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.789573 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ab3708-479d-4ec2-9125-6c816bf6084f" containerName="ceilometer-notification-agent" Feb 16 17:26:02 crc kubenswrapper[4794]: E0216 17:26:02.789647 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ab3708-479d-4ec2-9125-6c816bf6084f" containerName="sg-core" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.789699 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ab3708-479d-4ec2-9125-6c816bf6084f" containerName="sg-core" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.790008 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ab3708-479d-4ec2-9125-6c816bf6084f" containerName="proxy-httpd" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.790102 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ab3708-479d-4ec2-9125-6c816bf6084f" containerName="ceilometer-central-agent" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.790187 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ab3708-479d-4ec2-9125-6c816bf6084f" containerName="ceilometer-notification-agent" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.790245 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ab3708-479d-4ec2-9125-6c816bf6084f" containerName="sg-core" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.792368 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.794450 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.795362 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.796168 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.809759 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15ab3708-479d-4ec2-9125-6c816bf6084f" path="/var/lib/kubelet/pods/15ab3708-479d-4ec2-9125-6c816bf6084f/volumes" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.816614 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691" Feb 16 17:26:02 crc kubenswrapper[4794]: E0216 17:26:02.816870 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.840281 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.859733 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8981f528-1f74-4d56-a93c-22860725b490-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " 
pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.860203 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8981f528-1f74-4d56-a93c-22860725b490-config-data\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.860471 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8981f528-1f74-4d56-a93c-22860725b490-run-httpd\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.860514 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8981f528-1f74-4d56-a93c-22860725b490-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.860537 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8981f528-1f74-4d56-a93c-22860725b490-scripts\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.860628 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9v9n\" (UniqueName: \"kubernetes.io/projected/8981f528-1f74-4d56-a93c-22860725b490-kube-api-access-f9v9n\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.860769 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8981f528-1f74-4d56-a93c-22860725b490-log-httpd\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.860823 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8981f528-1f74-4d56-a93c-22860725b490-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.962285 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8981f528-1f74-4d56-a93c-22860725b490-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.962574 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8981f528-1f74-4d56-a93c-22860725b490-scripts\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.962720 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9v9n\" (UniqueName: \"kubernetes.io/projected/8981f528-1f74-4d56-a93c-22860725b490-kube-api-access-f9v9n\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.962912 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8981f528-1f74-4d56-a93c-22860725b490-log-httpd\") pod \"ceilometer-0\" 
(UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.963003 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8981f528-1f74-4d56-a93c-22860725b490-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.963121 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8981f528-1f74-4d56-a93c-22860725b490-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.963227 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8981f528-1f74-4d56-a93c-22860725b490-config-data\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.963433 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8981f528-1f74-4d56-a93c-22860725b490-run-httpd\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.963905 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8981f528-1f74-4d56-a93c-22860725b490-log-httpd\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.964152 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/8981f528-1f74-4d56-a93c-22860725b490-run-httpd\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.967012 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8981f528-1f74-4d56-a93c-22860725b490-scripts\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.968856 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8981f528-1f74-4d56-a93c-22860725b490-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.970441 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8981f528-1f74-4d56-a93c-22860725b490-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.970510 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8981f528-1f74-4d56-a93c-22860725b490-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:02 crc kubenswrapper[4794]: I0216 17:26:02.985316 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8981f528-1f74-4d56-a93c-22860725b490-config-data\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:03 crc kubenswrapper[4794]: I0216 17:26:03.006148 4794 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9v9n\" (UniqueName: \"kubernetes.io/projected/8981f528-1f74-4d56-a93c-22860725b490-kube-api-access-f9v9n\") pod \"ceilometer-0\" (UID: \"8981f528-1f74-4d56-a93c-22860725b490\") " pod="openstack/ceilometer-0" Feb 16 17:26:03 crc kubenswrapper[4794]: I0216 17:26:03.115869 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:26:03 crc kubenswrapper[4794]: I0216 17:26:03.674452 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:26:03 crc kubenswrapper[4794]: E0216 17:26:03.834122 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 17:26:03 crc kubenswrapper[4794]: E0216 17:26:03.834761 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 17:26:03 crc kubenswrapper[4794]: E0216 17:26:03.834912 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh58dh6ch557h84h55ch564h5bh58fh5c8h5d4h584h669h667h569h59hd5hdbh9dh67ch5f9h59fh597h96h664h687h66dhfch5ddh5b7h88h59cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9v9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8981f528-1f74-4d56-a93c-22860725b490): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 17:26:04 crc kubenswrapper[4794]: I0216 17:26:04.065558 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" containerName="rabbitmq" containerID="cri-o://a1ccf81377d5eb39238f66da309168f15f2ef4541d8767081e5210e38916edef" gracePeriod=604796 Feb 16 17:26:04 crc kubenswrapper[4794]: I0216 17:26:04.438581 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8981f528-1f74-4d56-a93c-22860725b490","Type":"ContainerStarted","Data":"4a9184984863c353c083cdf94c15f31cd99b688075d2bef88cbb74d5e9c85ddd"} Feb 16 17:26:05 crc kubenswrapper[4794]: I0216 17:26:05.189278 4794 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:26:05 crc kubenswrapper[4794]: I0216 17:26:05.452724 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8981f528-1f74-4d56-a93c-22860725b490","Type":"ContainerStarted","Data":"a850f85db9cb1a5ad4cffca0a55e023f8669b893594db8e2eab3ff87cc086bce"} Feb 16 17:26:05 crc kubenswrapper[4794]: I0216 17:26:05.453100 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8981f528-1f74-4d56-a93c-22860725b490","Type":"ContainerStarted","Data":"f7648ae8528d78d37fe591d39987801b80152f806eec7da282f3bd011b84d4e3"} Feb 16 17:26:05 crc kubenswrapper[4794]: I0216 17:26:05.754065 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="47572286-fbbf-4189-9c6f-feb54624ee2a" containerName="rabbitmq" containerID="cri-o://78060d4db70d41c4b478fe59a79e973c4b66567fab8194633868092f4711eba2" gracePeriod=604795 Feb 16 17:26:06 crc kubenswrapper[4794]: E0216 17:26:06.806221 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:26:07 crc kubenswrapper[4794]: I0216 17:26:07.479962 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8981f528-1f74-4d56-a93c-22860725b490","Type":"ContainerStarted","Data":"68f59772e121b9d209a96b56840860e811df48845ef5f662189c01eda51d32a5"} Feb 16 17:26:07 crc kubenswrapper[4794]: I0216 17:26:07.481071 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 17:26:07 crc kubenswrapper[4794]: E0216 17:26:07.482202 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:26:08 crc kubenswrapper[4794]: E0216 17:26:08.492795 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.512799 4794 generic.go:334] "Generic (PLEG): container finished" podID="14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" containerID="a1ccf81377d5eb39238f66da309168f15f2ef4541d8767081e5210e38916edef" exitCode=0 Feb 16 17:26:10 crc 
kubenswrapper[4794]: I0216 17:26:10.512842 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8","Type":"ContainerDied","Data":"a1ccf81377d5eb39238f66da309168f15f2ef4541d8767081e5210e38916edef"} Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.815348 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.878668 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-erlang-cookie-secret\") pod \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.878737 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-pod-info\") pod \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.878779 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-tls\") pod \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.878897 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-plugins-conf\") pod \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.878971 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-plugins\") pod \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.879012 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-erlang-cookie\") pod \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.879072 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs5wx\" (UniqueName: \"kubernetes.io/projected/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-kube-api-access-hs5wx\") pod \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.879961 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-50f0f550-0bce-496f-9120-455efff95d36\") pod \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.880025 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-confd\") pod \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.880054 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" (UID: 
"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.880075 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-config-data\") pod \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.880107 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-server-conf\") pod \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\" (UID: \"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8\") " Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.880614 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" (UID: "14a6d353-2dbd-49f5-b69f-1fdcd5c13db8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.881433 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" (UID: "14a6d353-2dbd-49f5-b69f-1fdcd5c13db8"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.881669 4794 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.881865 4794 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.881879 4794 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.887294 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-pod-info" (OuterVolumeSpecName: "pod-info") pod "14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" (UID: "14a6d353-2dbd-49f5-b69f-1fdcd5c13db8"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.888076 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" (UID: "14a6d353-2dbd-49f5-b69f-1fdcd5c13db8"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.906817 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" (UID: "14a6d353-2dbd-49f5-b69f-1fdcd5c13db8"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.906904 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-kube-api-access-hs5wx" (OuterVolumeSpecName: "kube-api-access-hs5wx") pod "14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" (UID: "14a6d353-2dbd-49f5-b69f-1fdcd5c13db8"). InnerVolumeSpecName "kube-api-access-hs5wx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.962722 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-config-data" (OuterVolumeSpecName: "config-data") pod "14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" (UID: "14a6d353-2dbd-49f5-b69f-1fdcd5c13db8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.981728 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-50f0f550-0bce-496f-9120-455efff95d36" (OuterVolumeSpecName: "persistence") pod "14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" (UID: "14a6d353-2dbd-49f5-b69f-1fdcd5c13db8"). InnerVolumeSpecName "pvc-50f0f550-0bce-496f-9120-455efff95d36". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.983985 4794 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-50f0f550-0bce-496f-9120-455efff95d36\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-50f0f550-0bce-496f-9120-455efff95d36\") on node \"crc\" " Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.984032 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.984046 4794 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.984057 4794 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.984067 4794 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:10 crc kubenswrapper[4794]: I0216 17:26:10.984075 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs5wx\" (UniqueName: \"kubernetes.io/projected/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-kube-api-access-hs5wx\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.001406 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-server-conf" (OuterVolumeSpecName: "server-conf") pod 
"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" (UID: "14a6d353-2dbd-49f5-b69f-1fdcd5c13db8"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.040367 4794 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.040943 4794 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-50f0f550-0bce-496f-9120-455efff95d36" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-50f0f550-0bce-496f-9120-455efff95d36") on node "crc" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.052589 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" (UID: "14a6d353-2dbd-49f5-b69f-1fdcd5c13db8"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.086551 4794 reconciler_common.go:293] "Volume detached for volume \"pvc-50f0f550-0bce-496f-9120-455efff95d36\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-50f0f550-0bce-496f-9120-455efff95d36\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.086583 4794 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.086593 4794 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.527821 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"14a6d353-2dbd-49f5-b69f-1fdcd5c13db8","Type":"ContainerDied","Data":"8dccc3ba620f2918eb1300115aaccebdba42f6717766d76b9570f415509e95bd"} Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.527884 4794 scope.go:117] "RemoveContainer" containerID="a1ccf81377d5eb39238f66da309168f15f2ef4541d8767081e5210e38916edef" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.527894 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.565892 4794 scope.go:117] "RemoveContainer" containerID="b004f25d6252ce636e11c9fcd2ce973a1cb440882c3b2a80e3a5d3acf1ec4abf" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.581349 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.609055 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.656568 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 17:26:11 crc kubenswrapper[4794]: E0216 17:26:11.659582 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" containerName="rabbitmq" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.659620 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" containerName="rabbitmq" Feb 16 17:26:11 crc kubenswrapper[4794]: E0216 17:26:11.659663 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" containerName="setup-container" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.659692 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" containerName="setup-container" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.660782 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" containerName="rabbitmq" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.663818 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.684319 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.715363 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.715748 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psxwm\" (UniqueName: \"kubernetes.io/projected/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-kube-api-access-psxwm\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.715875 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.715898 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.715939 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-50f0f550-0bce-496f-9120-455efff95d36\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-50f0f550-0bce-496f-9120-455efff95d36\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.716203 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.716656 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.716683 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.716703 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.716756 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-config-data\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.716796 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.818510 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.818834 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.818926 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psxwm\" (UniqueName: \"kubernetes.io/projected/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-kube-api-access-psxwm\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.819098 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-plugins-conf\") pod \"rabbitmq-server-2\" (UID: 
\"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.819201 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.819352 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-50f0f550-0bce-496f-9120-455efff95d36\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-50f0f550-0bce-496f-9120-455efff95d36\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.819566 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.819775 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.820009 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 
17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.820119 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.820220 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.820397 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-config-data\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.820167 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.820078 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.820072 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.821424 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-config-data\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.823515 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.823800 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.824273 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.824318 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-50f0f550-0bce-496f-9120-455efff95d36\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-50f0f550-0bce-496f-9120-455efff95d36\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4c15002b76397959393cbf983d3dd1ee42d1ae06ec66f0df68175f8304780e0f/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.829835 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.835794 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.837357 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psxwm\" (UniqueName: \"kubernetes.io/projected/f02565a7-c476-4aa0-a4b4-bb7caacb4ec7-kube-api-access-psxwm\") pod \"rabbitmq-server-2\" (UID: \"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.874730 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-50f0f550-0bce-496f-9120-455efff95d36\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-50f0f550-0bce-496f-9120-455efff95d36\") pod \"rabbitmq-server-2\" (UID: 
\"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7\") " pod="openstack/rabbitmq-server-2" Feb 16 17:26:11 crc kubenswrapper[4794]: I0216 17:26:11.996107 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 17:26:12 crc kubenswrapper[4794]: W0216 17:26:12.542916 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf02565a7_c476_4aa0_a4b4_bb7caacb4ec7.slice/crio-de5f74863358ba9c02657da6089e5b535b32e605bfc04539eb749ac27c750fbe WatchSource:0}: Error finding container de5f74863358ba9c02657da6089e5b535b32e605bfc04539eb749ac27c750fbe: Status 404 returned error can't find the container with id de5f74863358ba9c02657da6089e5b535b32e605bfc04539eb749ac27c750fbe Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.543318 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.543563 4794 generic.go:334] "Generic (PLEG): container finished" podID="47572286-fbbf-4189-9c6f-feb54624ee2a" containerID="78060d4db70d41c4b478fe59a79e973c4b66567fab8194633868092f4711eba2" exitCode=0 Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.543655 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"47572286-fbbf-4189-9c6f-feb54624ee2a","Type":"ContainerDied","Data":"78060d4db70d41c4b478fe59a79e973c4b66567fab8194633868092f4711eba2"} Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.543684 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"47572286-fbbf-4189-9c6f-feb54624ee2a","Type":"ContainerDied","Data":"2b0ed0ea2b15a42c330584c07dfdbcd182b1d2f69dca7f086e773464cc8fbb90"} Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.543694 4794 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="2b0ed0ea2b15a42c330584c07dfdbcd182b1d2f69dca7f086e773464cc8fbb90" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.715766 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.758170 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nddf6\" (UniqueName: \"kubernetes.io/projected/47572286-fbbf-4189-9c6f-feb54624ee2a-kube-api-access-nddf6\") pod \"47572286-fbbf-4189-9c6f-feb54624ee2a\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.758455 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-confd\") pod \"47572286-fbbf-4189-9c6f-feb54624ee2a\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.758534 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-plugins\") pod \"47572286-fbbf-4189-9c6f-feb54624ee2a\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.759094 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "47572286-fbbf-4189-9c6f-feb54624ee2a" (UID: "47572286-fbbf-4189-9c6f-feb54624ee2a"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.759274 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb\") pod \"47572286-fbbf-4189-9c6f-feb54624ee2a\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.759455 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/47572286-fbbf-4189-9c6f-feb54624ee2a-erlang-cookie-secret\") pod \"47572286-fbbf-4189-9c6f-feb54624ee2a\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.759485 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-tls\") pod \"47572286-fbbf-4189-9c6f-feb54624ee2a\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.759591 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-erlang-cookie\") pod \"47572286-fbbf-4189-9c6f-feb54624ee2a\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.759799 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/47572286-fbbf-4189-9c6f-feb54624ee2a-config-data\") pod \"47572286-fbbf-4189-9c6f-feb54624ee2a\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.759858 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/47572286-fbbf-4189-9c6f-feb54624ee2a-pod-info\") pod \"47572286-fbbf-4189-9c6f-feb54624ee2a\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.759889 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/47572286-fbbf-4189-9c6f-feb54624ee2a-plugins-conf\") pod \"47572286-fbbf-4189-9c6f-feb54624ee2a\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.759934 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/47572286-fbbf-4189-9c6f-feb54624ee2a-server-conf\") pod \"47572286-fbbf-4189-9c6f-feb54624ee2a\" (UID: \"47572286-fbbf-4189-9c6f-feb54624ee2a\") " Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.761030 4794 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.763511 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47572286-fbbf-4189-9c6f-feb54624ee2a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "47572286-fbbf-4189-9c6f-feb54624ee2a" (UID: "47572286-fbbf-4189-9c6f-feb54624ee2a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.763893 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "47572286-fbbf-4189-9c6f-feb54624ee2a" (UID: "47572286-fbbf-4189-9c6f-feb54624ee2a"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.765518 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "47572286-fbbf-4189-9c6f-feb54624ee2a" (UID: "47572286-fbbf-4189-9c6f-feb54624ee2a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.765574 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47572286-fbbf-4189-9c6f-feb54624ee2a-kube-api-access-nddf6" (OuterVolumeSpecName: "kube-api-access-nddf6") pod "47572286-fbbf-4189-9c6f-feb54624ee2a" (UID: "47572286-fbbf-4189-9c6f-feb54624ee2a"). InnerVolumeSpecName "kube-api-access-nddf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.766416 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47572286-fbbf-4189-9c6f-feb54624ee2a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "47572286-fbbf-4189-9c6f-feb54624ee2a" (UID: "47572286-fbbf-4189-9c6f-feb54624ee2a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.792557 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/47572286-fbbf-4189-9c6f-feb54624ee2a-pod-info" (OuterVolumeSpecName: "pod-info") pod "47572286-fbbf-4189-9c6f-feb54624ee2a" (UID: "47572286-fbbf-4189-9c6f-feb54624ee2a"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.828752 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14a6d353-2dbd-49f5-b69f-1fdcd5c13db8" path="/var/lib/kubelet/pods/14a6d353-2dbd-49f5-b69f-1fdcd5c13db8/volumes" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.846971 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb" (OuterVolumeSpecName: "persistence") pod "47572286-fbbf-4189-9c6f-feb54624ee2a" (UID: "47572286-fbbf-4189-9c6f-feb54624ee2a"). InnerVolumeSpecName "pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.850933 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47572286-fbbf-4189-9c6f-feb54624ee2a-config-data" (OuterVolumeSpecName: "config-data") pod "47572286-fbbf-4189-9c6f-feb54624ee2a" (UID: "47572286-fbbf-4189-9c6f-feb54624ee2a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.865254 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/47572286-fbbf-4189-9c6f-feb54624ee2a-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.865314 4794 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/47572286-fbbf-4189-9c6f-feb54624ee2a-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.865325 4794 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/47572286-fbbf-4189-9c6f-feb54624ee2a-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.865336 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nddf6\" (UniqueName: \"kubernetes.io/projected/47572286-fbbf-4189-9c6f-feb54624ee2a-kube-api-access-nddf6\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.865364 4794 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb\") on node \"crc\" " Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.865377 4794 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/47572286-fbbf-4189-9c6f-feb54624ee2a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.865388 4794 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:12 
crc kubenswrapper[4794]: I0216 17:26:12.865397 4794 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.943126 4794 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.943383 4794 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb") on node "crc" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.945631 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47572286-fbbf-4189-9c6f-feb54624ee2a-server-conf" (OuterVolumeSpecName: "server-conf") pod "47572286-fbbf-4189-9c6f-feb54624ee2a" (UID: "47572286-fbbf-4189-9c6f-feb54624ee2a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.966465 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "47572286-fbbf-4189-9c6f-feb54624ee2a" (UID: "47572286-fbbf-4189-9c6f-feb54624ee2a"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.970125 4794 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/47572286-fbbf-4189-9c6f-feb54624ee2a-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.970166 4794 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/47572286-fbbf-4189-9c6f-feb54624ee2a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:12 crc kubenswrapper[4794]: I0216 17:26:12.970182 4794 reconciler_common.go:293] "Volume detached for volume \"pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.557438 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.557487 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7","Type":"ContainerStarted","Data":"de5f74863358ba9c02657da6089e5b535b32e605bfc04539eb749ac27c750fbe"} Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.607202 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.621626 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.633400 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 17:26:13 crc kubenswrapper[4794]: E0216 17:26:13.633843 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47572286-fbbf-4189-9c6f-feb54624ee2a" containerName="setup-container" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.633860 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="47572286-fbbf-4189-9c6f-feb54624ee2a" containerName="setup-container" Feb 16 17:26:13 crc kubenswrapper[4794]: E0216 17:26:13.633877 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47572286-fbbf-4189-9c6f-feb54624ee2a" containerName="rabbitmq" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.633883 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="47572286-fbbf-4189-9c6f-feb54624ee2a" containerName="rabbitmq" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.634147 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="47572286-fbbf-4189-9c6f-feb54624ee2a" containerName="rabbitmq" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.635381 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.637856 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.637856 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.638047 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.638330 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.638591 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.638902 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.647823 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-8m5dd" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.662944 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.688284 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d805784a-6606-49cf-a441-4e17697ab5ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.688369 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.688429 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfs7m\" (UniqueName: \"kubernetes.io/projected/d805784a-6606-49cf-a441-4e17697ab5ea-kube-api-access-qfs7m\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.688456 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d805784a-6606-49cf-a441-4e17697ab5ea-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.688495 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d805784a-6606-49cf-a441-4e17697ab5ea-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.688594 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d805784a-6606-49cf-a441-4e17697ab5ea-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.688621 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d805784a-6606-49cf-a441-4e17697ab5ea-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.688644 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d805784a-6606-49cf-a441-4e17697ab5ea-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.688703 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d805784a-6606-49cf-a441-4e17697ab5ea-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.688752 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d805784a-6606-49cf-a441-4e17697ab5ea-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.688827 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d805784a-6606-49cf-a441-4e17697ab5ea-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.790997 4794 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d805784a-6606-49cf-a441-4e17697ab5ea-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.791166 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d805784a-6606-49cf-a441-4e17697ab5ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.791210 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.791260 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfs7m\" (UniqueName: \"kubernetes.io/projected/d805784a-6606-49cf-a441-4e17697ab5ea-kube-api-access-qfs7m\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.791285 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d805784a-6606-49cf-a441-4e17697ab5ea-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.791339 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d805784a-6606-49cf-a441-4e17697ab5ea-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.791429 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d805784a-6606-49cf-a441-4e17697ab5ea-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.791459 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d805784a-6606-49cf-a441-4e17697ab5ea-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.791485 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d805784a-6606-49cf-a441-4e17697ab5ea-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.791522 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d805784a-6606-49cf-a441-4e17697ab5ea-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.791562 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/d805784a-6606-49cf-a441-4e17697ab5ea-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.792323 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d805784a-6606-49cf-a441-4e17697ab5ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.792467 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d805784a-6606-49cf-a441-4e17697ab5ea-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.792823 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d805784a-6606-49cf-a441-4e17697ab5ea-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.793073 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d805784a-6606-49cf-a441-4e17697ab5ea-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.793504 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d805784a-6606-49cf-a441-4e17697ab5ea-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.798349 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.798392 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fd6fe655d1de5a63c78809a5a13c105c52992f0077ee7c00afae181712258956/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.799162 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d805784a-6606-49cf-a441-4e17697ab5ea-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.813109 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfs7m\" (UniqueName: \"kubernetes.io/projected/d805784a-6606-49cf-a441-4e17697ab5ea-kube-api-access-qfs7m\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.871579 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d805784a-6606-49cf-a441-4e17697ab5ea-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 
16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.872396 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d805784a-6606-49cf-a441-4e17697ab5ea-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:13 crc kubenswrapper[4794]: I0216 17:26:13.872404 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d805784a-6606-49cf-a441-4e17697ab5ea-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:14 crc kubenswrapper[4794]: I0216 17:26:14.020578 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e508a32c-3b6e-49e8-a1fe-e546a58f5ecb\") pod \"rabbitmq-cell1-server-0\" (UID: \"d805784a-6606-49cf-a441-4e17697ab5ea\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:14 crc kubenswrapper[4794]: I0216 17:26:14.290775 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:14 crc kubenswrapper[4794]: I0216 17:26:14.805955 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691" Feb 16 17:26:14 crc kubenswrapper[4794]: E0216 17:26:14.806705 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:26:14 crc kubenswrapper[4794]: I0216 17:26:14.851513 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47572286-fbbf-4189-9c6f-feb54624ee2a" path="/var/lib/kubelet/pods/47572286-fbbf-4189-9c6f-feb54624ee2a/volumes" Feb 16 17:26:14 crc kubenswrapper[4794]: I0216 17:26:14.913780 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 17:26:14 crc kubenswrapper[4794]: I0216 17:26:14.968804 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-dxwjr"] Feb 16 17:26:14 crc kubenswrapper[4794]: E0216 17:26:14.971959 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 17:26:14 crc kubenswrapper[4794]: E0216 17:26:14.972255 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 17:26:14 crc kubenswrapper[4794]: E0216 17:26:14.972482 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2h5l
2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7gcsf_openstack(c695f880-15cb-45b1-8545-60d8437ec631): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:26:14 crc kubenswrapper[4794]: I0216 17:26:14.972045 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:14 crc kubenswrapper[4794]: E0216 17:26:14.975387 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:26:14 crc kubenswrapper[4794]: I0216 17:26:14.984923 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.009549 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-dxwjr"] Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.053614 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-config\") pod \"dnsmasq-dns-5b75489c6f-dxwjr\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") " pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.053683 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-dxwjr\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") " pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.053832 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-dxwjr\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") " pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.053909 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-dxwjr\" (UID: 
\"5c14f141-d406-40e2-9846-0c25f152856b\") " pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.053984 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-dxwjr\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") " pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.054247 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-dxwjr\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") " pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.054412 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4hmg\" (UniqueName: \"kubernetes.io/projected/5c14f141-d406-40e2-9846-0c25f152856b-kube-api-access-k4hmg\") pod \"dnsmasq-dns-5b75489c6f-dxwjr\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") " pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.156444 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-dxwjr\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") " pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.156750 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-openstack-edpm-ipam\") pod 
\"dnsmasq-dns-5b75489c6f-dxwjr\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") " pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.156854 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-dxwjr\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") " pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.156897 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4hmg\" (UniqueName: \"kubernetes.io/projected/5c14f141-d406-40e2-9846-0c25f152856b-kube-api-access-k4hmg\") pod \"dnsmasq-dns-5b75489c6f-dxwjr\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") " pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.156994 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-config\") pod \"dnsmasq-dns-5b75489c6f-dxwjr\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") " pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.157014 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-dxwjr\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") " pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.157048 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-dxwjr\" (UID: 
\"5c14f141-d406-40e2-9846-0c25f152856b\") " pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.157938 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-dxwjr\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") " pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.158079 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-dxwjr\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") " pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.158138 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-dxwjr\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") " pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.158209 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-dxwjr\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") " pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.158360 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-dxwjr\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") " pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" 
Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.158802 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-config\") pod \"dnsmasq-dns-5b75489c6f-dxwjr\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") " pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.177896 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4hmg\" (UniqueName: \"kubernetes.io/projected/5c14f141-d406-40e2-9846-0c25f152856b-kube-api-access-k4hmg\") pod \"dnsmasq-dns-5b75489c6f-dxwjr\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") " pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.427181 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.590638 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7","Type":"ContainerStarted","Data":"4ae890f6d659387aca25c7485d4af58a008e13711d5b33575c7e12af8998fefb"} Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.594468 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d805784a-6606-49cf-a441-4e17697ab5ea","Type":"ContainerStarted","Data":"4c150d1ea6f202c93166d6d04c4c81bc195d526eaa94972997434e62fdecc1ab"} Feb 16 17:26:15 crc kubenswrapper[4794]: I0216 17:26:15.967984 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-dxwjr"] Feb 16 17:26:16 crc kubenswrapper[4794]: I0216 17:26:16.609923 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" 
event={"ID":"5c14f141-d406-40e2-9846-0c25f152856b","Type":"ContainerStarted","Data":"6d663d7fb789d927cc9b6ef9bbb41f226fee91e91fed2b65165910804064808b"} Feb 16 17:26:17 crc kubenswrapper[4794]: I0216 17:26:17.528409 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="47572286-fbbf-4189-9c6f-feb54624ee2a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: i/o timeout" Feb 16 17:26:17 crc kubenswrapper[4794]: I0216 17:26:17.622755 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d805784a-6606-49cf-a441-4e17697ab5ea","Type":"ContainerStarted","Data":"981bf0f065c8220d903cbd77d5b374583011016126149ec3c408f3ca14903a5f"} Feb 16 17:26:17 crc kubenswrapper[4794]: I0216 17:26:17.624829 4794 generic.go:334] "Generic (PLEG): container finished" podID="5c14f141-d406-40e2-9846-0c25f152856b" containerID="989c437821b11317b24b46df016bcc6ac8833a58b1a7c423587ce2b962cf7f6c" exitCode=0 Feb 16 17:26:17 crc kubenswrapper[4794]: I0216 17:26:17.624881 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" event={"ID":"5c14f141-d406-40e2-9846-0c25f152856b","Type":"ContainerDied","Data":"989c437821b11317b24b46df016bcc6ac8833a58b1a7c423587ce2b962cf7f6c"} Feb 16 17:26:18 crc kubenswrapper[4794]: I0216 17:26:18.640207 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" event={"ID":"5c14f141-d406-40e2-9846-0c25f152856b","Type":"ContainerStarted","Data":"373ddae53588307a030562d397fcbbf5fdfe097739a453dbbbd2dabe823eb893"} Feb 16 17:26:18 crc kubenswrapper[4794]: I0216 17:26:18.676852 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" podStartSLOduration=4.676832904 podStartE2EDuration="4.676832904s" podCreationTimestamp="2026-02-16 17:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:26:18.668142468 +0000 UTC m=+1604.616237135" watchObservedRunningTime="2026-02-16 17:26:18.676832904 +0000 UTC m=+1604.624927551" Feb 16 17:26:18 crc kubenswrapper[4794]: I0216 17:26:18.812876 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 17:26:18 crc kubenswrapper[4794]: E0216 17:26:18.907075 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 17:26:18 crc kubenswrapper[4794]: E0216 17:26:18.907137 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 17:26:18 crc kubenswrapper[4794]: E0216 17:26:18.907254 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh58dh6ch557h84h55ch564h5bh58fh5c8h5d4h584h669h667h569h59hd5hdbh9dh67ch5f9h59fh597h96h664h687h66dhfch5ddh5b7h88h59cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9v9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8981f528-1f74-4d56-a93c-22860725b490): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:26:18 crc kubenswrapper[4794]: E0216 17:26:18.908458 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:26:19 crc kubenswrapper[4794]: I0216 17:26:19.650144 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:19 crc kubenswrapper[4794]: E0216 17:26:19.651549 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.429686 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.514396 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-pczg4"] Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.514637 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" podUID="c9abdf39-73a5-420f-8b9b-59831d550111" containerName="dnsmasq-dns" containerID="cri-o://81cb2e84aea8e7f2b2910cce4d5631320a40bf09b87d1dd76fe3d11d640478ad" gracePeriod=10 Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.693279 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-97495"] Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.696007 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.734575 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-97495"] Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.743330 4794 generic.go:334] "Generic (PLEG): container finished" podID="c9abdf39-73a5-420f-8b9b-59831d550111" containerID="81cb2e84aea8e7f2b2910cce4d5631320a40bf09b87d1dd76fe3d11d640478ad" exitCode=0 Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.743376 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" event={"ID":"c9abdf39-73a5-420f-8b9b-59831d550111","Type":"ContainerDied","Data":"81cb2e84aea8e7f2b2910cce4d5631320a40bf09b87d1dd76fe3d11d640478ad"} Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.766037 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/00b864cb-0f2d-4ff9-ab38-0463ac283e01-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-97495\" (UID: \"00b864cb-0f2d-4ff9-ab38-0463ac283e01\") " pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.767436 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00b864cb-0f2d-4ff9-ab38-0463ac283e01-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-97495\" (UID: \"00b864cb-0f2d-4ff9-ab38-0463ac283e01\") " pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.767510 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00b864cb-0f2d-4ff9-ab38-0463ac283e01-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-97495\" (UID: \"00b864cb-0f2d-4ff9-ab38-0463ac283e01\") " 
pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.767581 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d95gv\" (UniqueName: \"kubernetes.io/projected/00b864cb-0f2d-4ff9-ab38-0463ac283e01-kube-api-access-d95gv\") pod \"dnsmasq-dns-5d75f767dc-97495\" (UID: \"00b864cb-0f2d-4ff9-ab38-0463ac283e01\") " pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.767824 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00b864cb-0f2d-4ff9-ab38-0463ac283e01-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-97495\" (UID: \"00b864cb-0f2d-4ff9-ab38-0463ac283e01\") " pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.768086 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00b864cb-0f2d-4ff9-ab38-0463ac283e01-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-97495\" (UID: \"00b864cb-0f2d-4ff9-ab38-0463ac283e01\") " pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.768739 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00b864cb-0f2d-4ff9-ab38-0463ac283e01-config\") pod \"dnsmasq-dns-5d75f767dc-97495\" (UID: \"00b864cb-0f2d-4ff9-ab38-0463ac283e01\") " pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.791407 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691" Feb 16 17:26:25 crc kubenswrapper[4794]: E0216 17:26:25.791946 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.875234 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00b864cb-0f2d-4ff9-ab38-0463ac283e01-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-97495\" (UID: \"00b864cb-0f2d-4ff9-ab38-0463ac283e01\") " pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.875438 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00b864cb-0f2d-4ff9-ab38-0463ac283e01-config\") pod \"dnsmasq-dns-5d75f767dc-97495\" (UID: \"00b864cb-0f2d-4ff9-ab38-0463ac283e01\") " pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.875520 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/00b864cb-0f2d-4ff9-ab38-0463ac283e01-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-97495\" (UID: \"00b864cb-0f2d-4ff9-ab38-0463ac283e01\") " pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.875539 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00b864cb-0f2d-4ff9-ab38-0463ac283e01-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-97495\" (UID: \"00b864cb-0f2d-4ff9-ab38-0463ac283e01\") " pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.875581 4794 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00b864cb-0f2d-4ff9-ab38-0463ac283e01-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-97495\" (UID: \"00b864cb-0f2d-4ff9-ab38-0463ac283e01\") " pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.875630 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d95gv\" (UniqueName: \"kubernetes.io/projected/00b864cb-0f2d-4ff9-ab38-0463ac283e01-kube-api-access-d95gv\") pod \"dnsmasq-dns-5d75f767dc-97495\" (UID: \"00b864cb-0f2d-4ff9-ab38-0463ac283e01\") " pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.875671 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00b864cb-0f2d-4ff9-ab38-0463ac283e01-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-97495\" (UID: \"00b864cb-0f2d-4ff9-ab38-0463ac283e01\") " pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.876668 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/00b864cb-0f2d-4ff9-ab38-0463ac283e01-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-97495\" (UID: \"00b864cb-0f2d-4ff9-ab38-0463ac283e01\") " pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.876961 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00b864cb-0f2d-4ff9-ab38-0463ac283e01-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-97495\" (UID: \"00b864cb-0f2d-4ff9-ab38-0463ac283e01\") " pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.877329 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00b864cb-0f2d-4ff9-ab38-0463ac283e01-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-97495\" (UID: \"00b864cb-0f2d-4ff9-ab38-0463ac283e01\") " pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.877560 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00b864cb-0f2d-4ff9-ab38-0463ac283e01-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-97495\" (UID: \"00b864cb-0f2d-4ff9-ab38-0463ac283e01\") " pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.878409 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00b864cb-0f2d-4ff9-ab38-0463ac283e01-config\") pod \"dnsmasq-dns-5d75f767dc-97495\" (UID: \"00b864cb-0f2d-4ff9-ab38-0463ac283e01\") " pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.879531 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00b864cb-0f2d-4ff9-ab38-0463ac283e01-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-97495\" (UID: \"00b864cb-0f2d-4ff9-ab38-0463ac283e01\") " pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:25 crc kubenswrapper[4794]: I0216 17:26:25.901121 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d95gv\" (UniqueName: \"kubernetes.io/projected/00b864cb-0f2d-4ff9-ab38-0463ac283e01-kube-api-access-d95gv\") pod \"dnsmasq-dns-5d75f767dc-97495\" (UID: \"00b864cb-0f2d-4ff9-ab38-0463ac283e01\") " pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.018231 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-97495" Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.209730 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.285971 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-ovsdbserver-sb\") pod \"c9abdf39-73a5-420f-8b9b-59831d550111\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") " Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.286041 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkbjs\" (UniqueName: \"kubernetes.io/projected/c9abdf39-73a5-420f-8b9b-59831d550111-kube-api-access-fkbjs\") pod \"c9abdf39-73a5-420f-8b9b-59831d550111\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") " Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.286195 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-config\") pod \"c9abdf39-73a5-420f-8b9b-59831d550111\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") " Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.286377 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-dns-svc\") pod \"c9abdf39-73a5-420f-8b9b-59831d550111\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") " Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.286399 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-dns-swift-storage-0\") pod \"c9abdf39-73a5-420f-8b9b-59831d550111\" (UID: 
\"c9abdf39-73a5-420f-8b9b-59831d550111\") "
Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.286421 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-ovsdbserver-nb\") pod \"c9abdf39-73a5-420f-8b9b-59831d550111\" (UID: \"c9abdf39-73a5-420f-8b9b-59831d550111\") "
Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.293190 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9abdf39-73a5-420f-8b9b-59831d550111-kube-api-access-fkbjs" (OuterVolumeSpecName: "kube-api-access-fkbjs") pod "c9abdf39-73a5-420f-8b9b-59831d550111" (UID: "c9abdf39-73a5-420f-8b9b-59831d550111"). InnerVolumeSpecName "kube-api-access-fkbjs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.359264 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-config" (OuterVolumeSpecName: "config") pod "c9abdf39-73a5-420f-8b9b-59831d550111" (UID: "c9abdf39-73a5-420f-8b9b-59831d550111"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.382024 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c9abdf39-73a5-420f-8b9b-59831d550111" (UID: "c9abdf39-73a5-420f-8b9b-59831d550111"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.387152 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c9abdf39-73a5-420f-8b9b-59831d550111" (UID: "c9abdf39-73a5-420f-8b9b-59831d550111"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.388851 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkbjs\" (UniqueName: \"kubernetes.io/projected/c9abdf39-73a5-420f-8b9b-59831d550111-kube-api-access-fkbjs\") on node \"crc\" DevicePath \"\""
Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.388874 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-config\") on node \"crc\" DevicePath \"\""
Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.388885 4794 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.388896 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.444927 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c9abdf39-73a5-420f-8b9b-59831d550111" (UID: "c9abdf39-73a5-420f-8b9b-59831d550111"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.450500 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c9abdf39-73a5-420f-8b9b-59831d550111" (UID: "c9abdf39-73a5-420f-8b9b-59831d550111"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.491437 4794 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.491474 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9abdf39-73a5-420f-8b9b-59831d550111-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.641318 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-97495"]
Feb 16 17:26:26 crc kubenswrapper[4794]: W0216 17:26:26.643081 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00b864cb_0f2d_4ff9_ab38_0463ac283e01.slice/crio-ee67c0a425b8e3eff3e1bd7f4f4470a88a237166c9e88d7a0409d34b0f5e344f WatchSource:0}: Error finding container ee67c0a425b8e3eff3e1bd7f4f4470a88a237166c9e88d7a0409d34b0f5e344f: Status 404 returned error can't find the container with id ee67c0a425b8e3eff3e1bd7f4f4470a88a237166c9e88d7a0409d34b0f5e344f
Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.765035 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-97495" event={"ID":"00b864cb-0f2d-4ff9-ab38-0463ac283e01","Type":"ContainerStarted","Data":"ee67c0a425b8e3eff3e1bd7f4f4470a88a237166c9e88d7a0409d34b0f5e344f"}
Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.767920 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-pczg4" event={"ID":"c9abdf39-73a5-420f-8b9b-59831d550111","Type":"ContainerDied","Data":"53b01755854ec804139457859821b7d1de227b10bcc305c7db758e469b86352e"}
Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.768008 4794 scope.go:117] "RemoveContainer" containerID="81cb2e84aea8e7f2b2910cce4d5631320a40bf09b87d1dd76fe3d11d640478ad"
Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.768251 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-pczg4"
Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.844110 4794 scope.go:117] "RemoveContainer" containerID="2642727f1e737a0fd54e22ac129c67e5e32a0a08c556a8175d72e5def5391707"
Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.879539 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-pczg4"]
Feb 16 17:26:26 crc kubenswrapper[4794]: I0216 17:26:26.893510 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-pczg4"]
Feb 16 17:26:27 crc kubenswrapper[4794]: I0216 17:26:27.793379 4794 generic.go:334] "Generic (PLEG): container finished" podID="00b864cb-0f2d-4ff9-ab38-0463ac283e01" containerID="4688555111c350436f638f910f2f7b1e2cee42ee8e028abadcc8cb6473450fae" exitCode=0
Feb 16 17:26:27 crc kubenswrapper[4794]: I0216 17:26:27.793449 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-97495" event={"ID":"00b864cb-0f2d-4ff9-ab38-0463ac283e01","Type":"ContainerDied","Data":"4688555111c350436f638f910f2f7b1e2cee42ee8e028abadcc8cb6473450fae"}
Feb 16 17:26:28 crc kubenswrapper[4794]: I0216 17:26:28.812941 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9abdf39-73a5-420f-8b9b-59831d550111" path="/var/lib/kubelet/pods/c9abdf39-73a5-420f-8b9b-59831d550111/volumes"
Feb 16 17:26:28 crc kubenswrapper[4794]: I0216 17:26:28.818738 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-97495" event={"ID":"00b864cb-0f2d-4ff9-ab38-0463ac283e01","Type":"ContainerStarted","Data":"beb67331b9533f2722b4884c8a0fe5b7939aa5e97ffbb1ec5f742cadd11d38af"}
Feb 16 17:26:28 crc kubenswrapper[4794]: I0216 17:26:28.819494 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d75f767dc-97495"
Feb 16 17:26:28 crc kubenswrapper[4794]: I0216 17:26:28.851538 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d75f767dc-97495" podStartSLOduration=3.8515191460000002 podStartE2EDuration="3.851519146s" podCreationTimestamp="2026-02-16 17:26:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:26:28.846417772 +0000 UTC m=+1614.794512429" watchObservedRunningTime="2026-02-16 17:26:28.851519146 +0000 UTC m=+1614.799613803"
Feb 16 17:26:29 crc kubenswrapper[4794]: E0216 17:26:29.793356 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:26:34 crc kubenswrapper[4794]: E0216 17:26:34.804820 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.020510 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d75f767dc-97495"
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.099871 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-dxwjr"]
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.100165 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" podUID="5c14f141-d406-40e2-9846-0c25f152856b" containerName="dnsmasq-dns" containerID="cri-o://373ddae53588307a030562d397fcbbf5fdfe097739a453dbbbd2dabe823eb893" gracePeriod=10
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.748649 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr"
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.850903 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-dns-swift-storage-0\") pod \"5c14f141-d406-40e2-9846-0c25f152856b\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") "
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.851031 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-ovsdbserver-sb\") pod \"5c14f141-d406-40e2-9846-0c25f152856b\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") "
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.851060 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-config\") pod \"5c14f141-d406-40e2-9846-0c25f152856b\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") "
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.851977 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-dns-svc\") pod \"5c14f141-d406-40e2-9846-0c25f152856b\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") "
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.852017 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4hmg\" (UniqueName: \"kubernetes.io/projected/5c14f141-d406-40e2-9846-0c25f152856b-kube-api-access-k4hmg\") pod \"5c14f141-d406-40e2-9846-0c25f152856b\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") "
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.852070 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-openstack-edpm-ipam\") pod \"5c14f141-d406-40e2-9846-0c25f152856b\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") "
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.852160 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-ovsdbserver-nb\") pod \"5c14f141-d406-40e2-9846-0c25f152856b\" (UID: \"5c14f141-d406-40e2-9846-0c25f152856b\") "
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.875823 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c14f141-d406-40e2-9846-0c25f152856b-kube-api-access-k4hmg" (OuterVolumeSpecName: "kube-api-access-k4hmg") pod "5c14f141-d406-40e2-9846-0c25f152856b" (UID: "5c14f141-d406-40e2-9846-0c25f152856b"). InnerVolumeSpecName "kube-api-access-k4hmg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.915348 4794 generic.go:334] "Generic (PLEG): container finished" podID="5c14f141-d406-40e2-9846-0c25f152856b" containerID="373ddae53588307a030562d397fcbbf5fdfe097739a453dbbbd2dabe823eb893" exitCode=0
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.915403 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" event={"ID":"5c14f141-d406-40e2-9846-0c25f152856b","Type":"ContainerDied","Data":"373ddae53588307a030562d397fcbbf5fdfe097739a453dbbbd2dabe823eb893"}
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.915436 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr" event={"ID":"5c14f141-d406-40e2-9846-0c25f152856b","Type":"ContainerDied","Data":"6d663d7fb789d927cc9b6ef9bbb41f226fee91e91fed2b65165910804064808b"}
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.915459 4794 scope.go:117] "RemoveContainer" containerID="373ddae53588307a030562d397fcbbf5fdfe097739a453dbbbd2dabe823eb893"
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.915866 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-dxwjr"
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.959554 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4hmg\" (UniqueName: \"kubernetes.io/projected/5c14f141-d406-40e2-9846-0c25f152856b-kube-api-access-k4hmg\") on node \"crc\" DevicePath \"\""
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.980899 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5c14f141-d406-40e2-9846-0c25f152856b" (UID: "5c14f141-d406-40e2-9846-0c25f152856b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.984219 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5c14f141-d406-40e2-9846-0c25f152856b" (UID: "5c14f141-d406-40e2-9846-0c25f152856b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.989138 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5c14f141-d406-40e2-9846-0c25f152856b" (UID: "5c14f141-d406-40e2-9846-0c25f152856b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.998656 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-config" (OuterVolumeSpecName: "config") pod "5c14f141-d406-40e2-9846-0c25f152856b" (UID: "5c14f141-d406-40e2-9846-0c25f152856b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.998965 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "5c14f141-d406-40e2-9846-0c25f152856b" (UID: "5c14f141-d406-40e2-9846-0c25f152856b"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:26:36 crc kubenswrapper[4794]: I0216 17:26:36.999713 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5c14f141-d406-40e2-9846-0c25f152856b" (UID: "5c14f141-d406-40e2-9846-0c25f152856b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:26:37 crc kubenswrapper[4794]: I0216 17:26:37.034380 4794 scope.go:117] "RemoveContainer" containerID="989c437821b11317b24b46df016bcc6ac8833a58b1a7c423587ce2b962cf7f6c"
Feb 16 17:26:37 crc kubenswrapper[4794]: I0216 17:26:37.055582 4794 scope.go:117] "RemoveContainer" containerID="373ddae53588307a030562d397fcbbf5fdfe097739a453dbbbd2dabe823eb893"
Feb 16 17:26:37 crc kubenswrapper[4794]: E0216 17:26:37.055907 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"373ddae53588307a030562d397fcbbf5fdfe097739a453dbbbd2dabe823eb893\": container with ID starting with 373ddae53588307a030562d397fcbbf5fdfe097739a453dbbbd2dabe823eb893 not found: ID does not exist" containerID="373ddae53588307a030562d397fcbbf5fdfe097739a453dbbbd2dabe823eb893"
Feb 16 17:26:37 crc kubenswrapper[4794]: I0216 17:26:37.055939 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"373ddae53588307a030562d397fcbbf5fdfe097739a453dbbbd2dabe823eb893"} err="failed to get container status \"373ddae53588307a030562d397fcbbf5fdfe097739a453dbbbd2dabe823eb893\": rpc error: code = NotFound desc = could not find container \"373ddae53588307a030562d397fcbbf5fdfe097739a453dbbbd2dabe823eb893\": container with ID starting with 373ddae53588307a030562d397fcbbf5fdfe097739a453dbbbd2dabe823eb893 not found: ID does not exist"
Feb 16 17:26:37 crc kubenswrapper[4794]: I0216 17:26:37.055959 4794 scope.go:117] "RemoveContainer" containerID="989c437821b11317b24b46df016bcc6ac8833a58b1a7c423587ce2b962cf7f6c"
Feb 16 17:26:37 crc kubenswrapper[4794]: E0216 17:26:37.056216 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"989c437821b11317b24b46df016bcc6ac8833a58b1a7c423587ce2b962cf7f6c\": container with ID starting with 989c437821b11317b24b46df016bcc6ac8833a58b1a7c423587ce2b962cf7f6c not found: ID does not exist" containerID="989c437821b11317b24b46df016bcc6ac8833a58b1a7c423587ce2b962cf7f6c"
Feb 16 17:26:37 crc kubenswrapper[4794]: I0216 17:26:37.056276 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"989c437821b11317b24b46df016bcc6ac8833a58b1a7c423587ce2b962cf7f6c"} err="failed to get container status \"989c437821b11317b24b46df016bcc6ac8833a58b1a7c423587ce2b962cf7f6c\": rpc error: code = NotFound desc = could not find container \"989c437821b11317b24b46df016bcc6ac8833a58b1a7c423587ce2b962cf7f6c\": container with ID starting with 989c437821b11317b24b46df016bcc6ac8833a58b1a7c423587ce2b962cf7f6c not found: ID does not exist"
Feb 16 17:26:37 crc kubenswrapper[4794]: I0216 17:26:37.061947 4794 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 17:26:37 crc kubenswrapper[4794]: I0216 17:26:37.061973 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 17:26:37 crc kubenswrapper[4794]: I0216 17:26:37.061983 4794 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-config\") on node \"crc\" DevicePath \"\""
Feb 16 17:26:37 crc kubenswrapper[4794]: I0216 17:26:37.061991 4794 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 17:26:37 crc kubenswrapper[4794]: I0216 17:26:37.062000 4794 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 16 17:26:37 crc kubenswrapper[4794]: I0216 17:26:37.062008 4794 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c14f141-d406-40e2-9846-0c25f152856b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 17:26:37 crc kubenswrapper[4794]: I0216 17:26:37.323423 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-dxwjr"]
Feb 16 17:26:37 crc kubenswrapper[4794]: I0216 17:26:37.363345 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-dxwjr"]
Feb 16 17:26:37 crc kubenswrapper[4794]: I0216 17:26:37.999273 4794 scope.go:117] "RemoveContainer" containerID="92a5854561520f29512043bfa53b1c5f9a1f3caae385e57af28b57dc0df64414"
Feb 16 17:26:38 crc kubenswrapper[4794]: I0216 17:26:38.819862 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c14f141-d406-40e2-9846-0c25f152856b" path="/var/lib/kubelet/pods/5c14f141-d406-40e2-9846-0c25f152856b/volumes"
Feb 16 17:26:40 crc kubenswrapper[4794]: I0216 17:26:40.791863 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691"
Feb 16 17:26:40 crc kubenswrapper[4794]: E0216 17:26:40.792733 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:26:42 crc kubenswrapper[4794]: E0216 17:26:42.919095 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 17:26:42 crc kubenswrapper[4794]: E0216 17:26:42.919487 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 17:26:42 crc kubenswrapper[4794]: E0216 17:26:42.919667 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2h5l2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7gcsf_openstack(c695f880-15cb-45b1-8545-60d8437ec631): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 17:26:42 crc kubenswrapper[4794]: E0216 17:26:42.921258 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:26:47 crc kubenswrapper[4794]: I0216 17:26:47.080383 4794 generic.go:334] "Generic (PLEG): container finished" podID="f02565a7-c476-4aa0-a4b4-bb7caacb4ec7" containerID="4ae890f6d659387aca25c7485d4af58a008e13711d5b33575c7e12af8998fefb" exitCode=0
Feb 16 17:26:47 crc kubenswrapper[4794]: I0216 17:26:47.080437 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7","Type":"ContainerDied","Data":"4ae890f6d659387aca25c7485d4af58a008e13711d5b33575c7e12af8998fefb"}
Feb 16 17:26:48 crc kubenswrapper[4794]: I0216 17:26:48.093504 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f02565a7-c476-4aa0-a4b4-bb7caacb4ec7","Type":"ContainerStarted","Data":"8c0c60fbe5437cfa7f6d908553da1c122b3b1a1d099db09c1cdcabcf9ec9a061"}
Feb 16 17:26:48 crc kubenswrapper[4794]: I0216 17:26:48.093984 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2"
Feb 16 17:26:48 crc kubenswrapper[4794]: I0216 17:26:48.137209 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=37.137188109 podStartE2EDuration="37.137188109s" podCreationTimestamp="2026-02-16 17:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:26:48.127766581 +0000 UTC m=+1634.075861238" watchObservedRunningTime="2026-02-16 17:26:48.137188109 +0000 UTC m=+1634.085282756"
Feb 16 17:26:49 crc kubenswrapper[4794]: E0216 17:26:49.911382 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 17:26:49 crc kubenswrapper[4794]: E0216 17:26:49.911749 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 17:26:49 crc kubenswrapper[4794]: E0216 17:26:49.912131 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh58dh6ch557h84h55ch564h5bh58fh5c8h5d4h584h669h667h569h59hd5hdbh9dh67ch5f9h59fh597h96h664h687h66dhfch5ddh5b7h88h59cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9v9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8981f528-1f74-4d56-a93c-22860725b490): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 17:26:49 crc kubenswrapper[4794]: E0216 17:26:49.913280 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.112445 4794 generic.go:334] "Generic (PLEG): container finished" podID="d805784a-6606-49cf-a441-4e17697ab5ea" containerID="981bf0f065c8220d903cbd77d5b374583011016126149ec3c408f3ca14903a5f" exitCode=0
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.112555 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d805784a-6606-49cf-a441-4e17697ab5ea","Type":"ContainerDied","Data":"981bf0f065c8220d903cbd77d5b374583011016126149ec3c408f3ca14903a5f"}
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.475084 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w"]
Feb 16 17:26:50 crc kubenswrapper[4794]: E0216 17:26:50.475873 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9abdf39-73a5-420f-8b9b-59831d550111" containerName="dnsmasq-dns"
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.475891 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9abdf39-73a5-420f-8b9b-59831d550111" containerName="dnsmasq-dns"
Feb 16 17:26:50 crc kubenswrapper[4794]: E0216 17:26:50.475908 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c14f141-d406-40e2-9846-0c25f152856b" containerName="dnsmasq-dns"
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.475914 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c14f141-d406-40e2-9846-0c25f152856b" containerName="dnsmasq-dns"
Feb 16 17:26:50 crc kubenswrapper[4794]: E0216 17:26:50.475928 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c14f141-d406-40e2-9846-0c25f152856b" containerName="init"
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.475935 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c14f141-d406-40e2-9846-0c25f152856b" containerName="init"
Feb 16 17:26:50 crc kubenswrapper[4794]: E0216 17:26:50.475953 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9abdf39-73a5-420f-8b9b-59831d550111" containerName="init"
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.475959 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9abdf39-73a5-420f-8b9b-59831d550111" containerName="init"
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.476165 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c14f141-d406-40e2-9846-0c25f152856b" containerName="dnsmasq-dns"
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.476197 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9abdf39-73a5-420f-8b9b-59831d550111" containerName="dnsmasq-dns"
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.477001 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w"
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.480737 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.481674 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kshzw"
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.481732 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.481731 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.506224 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w"]
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.606012 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs62b\" (UniqueName: \"kubernetes.io/projected/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-kube-api-access-rs62b\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w\" (UID: \"08d7a50c-a4ea-45cd-81d7-d962bc1921d5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w"
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.606171 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w\" (UID: \"08d7a50c-a4ea-45cd-81d7-d962bc1921d5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w"
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.606255 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w\" (UID: \"08d7a50c-a4ea-45cd-81d7-d962bc1921d5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w"
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.606295 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w\" (UID: \"08d7a50c-a4ea-45cd-81d7-d962bc1921d5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w"
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.708840 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w\" (UID: \"08d7a50c-a4ea-45cd-81d7-d962bc1921d5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w"
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.709256 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w\" (UID: \"08d7a50c-a4ea-45cd-81d7-d962bc1921d5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w"
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.709495 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs62b\" (UniqueName: \"kubernetes.io/projected/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-kube-api-access-rs62b\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w\" (UID: \"08d7a50c-a4ea-45cd-81d7-d962bc1921d5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w"
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.709687 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w\" (UID: \"08d7a50c-a4ea-45cd-81d7-d962bc1921d5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w"
Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.717236 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w\" (UID: \"08d7a50c-a4ea-45cd-81d7-d962bc1921d5\") "
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w" Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.725867 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w\" (UID: \"08d7a50c-a4ea-45cd-81d7-d962bc1921d5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w" Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.726857 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w\" (UID: \"08d7a50c-a4ea-45cd-81d7-d962bc1921d5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w" Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.727174 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs62b\" (UniqueName: \"kubernetes.io/projected/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-kube-api-access-rs62b\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w\" (UID: \"08d7a50c-a4ea-45cd-81d7-d962bc1921d5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w" Feb 16 17:26:50 crc kubenswrapper[4794]: I0216 17:26:50.792489 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w" Feb 16 17:26:51 crc kubenswrapper[4794]: I0216 17:26:51.129098 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d805784a-6606-49cf-a441-4e17697ab5ea","Type":"ContainerStarted","Data":"40e653e02bf94aa96c41c93c0bd66b28cd951f925027d99e95faa7bbd47e81ae"} Feb 16 17:26:51 crc kubenswrapper[4794]: I0216 17:26:51.130487 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:26:51 crc kubenswrapper[4794]: W0216 17:26:51.344291 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08d7a50c_a4ea_45cd_81d7_d962bc1921d5.slice/crio-4297fa189dac34b5916291507f59b0de1429f47b31f9d7b96a8713b75f09045a WatchSource:0}: Error finding container 4297fa189dac34b5916291507f59b0de1429f47b31f9d7b96a8713b75f09045a: Status 404 returned error can't find the container with id 4297fa189dac34b5916291507f59b0de1429f47b31f9d7b96a8713b75f09045a Feb 16 17:26:51 crc kubenswrapper[4794]: I0216 17:26:51.349043 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.349023425 podStartE2EDuration="38.349023425s" podCreationTimestamp="2026-02-16 17:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:26:51.167331865 +0000 UTC m=+1637.115426542" watchObservedRunningTime="2026-02-16 17:26:51.349023425 +0000 UTC m=+1637.297118072" Feb 16 17:26:51 crc kubenswrapper[4794]: I0216 17:26:51.355903 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w"] Feb 16 17:26:52 crc kubenswrapper[4794]: I0216 17:26:52.144535 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w" event={"ID":"08d7a50c-a4ea-45cd-81d7-d962bc1921d5","Type":"ContainerStarted","Data":"4297fa189dac34b5916291507f59b0de1429f47b31f9d7b96a8713b75f09045a"} Feb 16 17:26:54 crc kubenswrapper[4794]: I0216 17:26:54.807974 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691" Feb 16 17:26:54 crc kubenswrapper[4794]: E0216 17:26:54.811682 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:26:57 crc kubenswrapper[4794]: E0216 17:26:57.793264 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:27:01 crc kubenswrapper[4794]: I0216 17:27:01.268393 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w" event={"ID":"08d7a50c-a4ea-45cd-81d7-d962bc1921d5","Type":"ContainerStarted","Data":"a0a89afadee34e7f07564898c60faa0f6c0d06a91d229669c2937755eb51d066"} Feb 16 17:27:01 crc kubenswrapper[4794]: I0216 17:27:01.294573 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w" podStartSLOduration=1.926376815 podStartE2EDuration="11.29454755s" podCreationTimestamp="2026-02-16 17:26:50 +0000 UTC" firstStartedPulling="2026-02-16 
17:26:51.346834773 +0000 UTC m=+1637.294929420" lastFinishedPulling="2026-02-16 17:27:00.715005508 +0000 UTC m=+1646.663100155" observedRunningTime="2026-02-16 17:27:01.283968729 +0000 UTC m=+1647.232063396" watchObservedRunningTime="2026-02-16 17:27:01.29454755 +0000 UTC m=+1647.242642197" Feb 16 17:27:02 crc kubenswrapper[4794]: I0216 17:27:02.009582 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 16 17:27:02 crc kubenswrapper[4794]: I0216 17:27:02.069445 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 17:27:02 crc kubenswrapper[4794]: E0216 17:27:02.797719 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:27:04 crc kubenswrapper[4794]: I0216 17:27:04.296504 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:27:05 crc kubenswrapper[4794]: I0216 17:27:05.916382 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="8fb6be66-7fef-4554-897b-30d9f4637138" containerName="rabbitmq" containerID="cri-o://523fb59c255a777ff296c7e21c97e54cffbe6d1d35fb7cb70cd1ded47a89b767" gracePeriod=604797 Feb 16 17:27:07 crc kubenswrapper[4794]: I0216 17:27:07.792552 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691" Feb 16 17:27:07 crc kubenswrapper[4794]: E0216 17:27:07.794363 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:27:10 crc kubenswrapper[4794]: E0216 17:27:10.794206 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.403969 4794 generic.go:334] "Generic (PLEG): container finished" podID="8fb6be66-7fef-4554-897b-30d9f4637138" containerID="523fb59c255a777ff296c7e21c97e54cffbe6d1d35fb7cb70cd1ded47a89b767" exitCode=0 Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.404652 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"8fb6be66-7fef-4554-897b-30d9f4637138","Type":"ContainerDied","Data":"523fb59c255a777ff296c7e21c97e54cffbe6d1d35fb7cb70cd1ded47a89b767"} Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.600183 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.686151 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\") pod \"8fb6be66-7fef-4554-897b-30d9f4637138\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.687509 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8fb6be66-7fef-4554-897b-30d9f4637138-plugins-conf\") pod \"8fb6be66-7fef-4554-897b-30d9f4637138\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.687648 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8fb6be66-7fef-4554-897b-30d9f4637138-erlang-cookie-secret\") pod \"8fb6be66-7fef-4554-897b-30d9f4637138\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.687716 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-tls\") pod \"8fb6be66-7fef-4554-897b-30d9f4637138\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.687760 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8fb6be66-7fef-4554-897b-30d9f4637138-server-conf\") pod \"8fb6be66-7fef-4554-897b-30d9f4637138\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.687891 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/8fb6be66-7fef-4554-897b-30d9f4637138-pod-info\") pod \"8fb6be66-7fef-4554-897b-30d9f4637138\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.688037 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-confd\") pod \"8fb6be66-7fef-4554-897b-30d9f4637138\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.688078 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-erlang-cookie\") pod \"8fb6be66-7fef-4554-897b-30d9f4637138\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.688141 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fb6be66-7fef-4554-897b-30d9f4637138-config-data\") pod \"8fb6be66-7fef-4554-897b-30d9f4637138\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.688182 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrs65\" (UniqueName: \"kubernetes.io/projected/8fb6be66-7fef-4554-897b-30d9f4637138-kube-api-access-rrs65\") pod \"8fb6be66-7fef-4554-897b-30d9f4637138\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.688298 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-plugins\") pod \"8fb6be66-7fef-4554-897b-30d9f4637138\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " Feb 16 17:27:12 
crc kubenswrapper[4794]: I0216 17:27:12.688713 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fb6be66-7fef-4554-897b-30d9f4637138-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8fb6be66-7fef-4554-897b-30d9f4637138" (UID: "8fb6be66-7fef-4554-897b-30d9f4637138"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.690034 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8fb6be66-7fef-4554-897b-30d9f4637138" (UID: "8fb6be66-7fef-4554-897b-30d9f4637138"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.690789 4794 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8fb6be66-7fef-4554-897b-30d9f4637138-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.690823 4794 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.690851 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8fb6be66-7fef-4554-897b-30d9f4637138" (UID: "8fb6be66-7fef-4554-897b-30d9f4637138"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.701042 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb6be66-7fef-4554-897b-30d9f4637138-kube-api-access-rrs65" (OuterVolumeSpecName: "kube-api-access-rrs65") pod "8fb6be66-7fef-4554-897b-30d9f4637138" (UID: "8fb6be66-7fef-4554-897b-30d9f4637138"). InnerVolumeSpecName "kube-api-access-rrs65". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.704939 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8fb6be66-7fef-4554-897b-30d9f4637138" (UID: "8fb6be66-7fef-4554-897b-30d9f4637138"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.709508 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fb6be66-7fef-4554-897b-30d9f4637138-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8fb6be66-7fef-4554-897b-30d9f4637138" (UID: "8fb6be66-7fef-4554-897b-30d9f4637138"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.719954 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8fb6be66-7fef-4554-897b-30d9f4637138-pod-info" (OuterVolumeSpecName: "pod-info") pod "8fb6be66-7fef-4554-897b-30d9f4637138" (UID: "8fb6be66-7fef-4554-897b-30d9f4637138"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 17:27:12 crc kubenswrapper[4794]: E0216 17:27:12.790962 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1 podName:8fb6be66-7fef-4554-897b-30d9f4637138 nodeName:}" failed. No retries permitted until 2026-02-16 17:27:13.290931828 +0000 UTC m=+1659.239026475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1") pod "8fb6be66-7fef-4554-897b-30d9f4637138" (UID: "8fb6be66-7fef-4554-897b-30d9f4637138") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.793002 4794 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.793218 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrs65\" (UniqueName: \"kubernetes.io/projected/8fb6be66-7fef-4554-897b-30d9f4637138-kube-api-access-rrs65\") on node \"crc\" DevicePath \"\"" Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.793242 4794 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8fb6be66-7fef-4554-897b-30d9f4637138-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.793254 4794 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 
17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.793267 4794 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8fb6be66-7fef-4554-897b-30d9f4637138-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.795507 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fb6be66-7fef-4554-897b-30d9f4637138-config-data" (OuterVolumeSpecName: "config-data") pod "8fb6be66-7fef-4554-897b-30d9f4637138" (UID: "8fb6be66-7fef-4554-897b-30d9f4637138"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.804284 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fb6be66-7fef-4554-897b-30d9f4637138-server-conf" (OuterVolumeSpecName: "server-conf") pod "8fb6be66-7fef-4554-897b-30d9f4637138" (UID: "8fb6be66-7fef-4554-897b-30d9f4637138"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.854967 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8fb6be66-7fef-4554-897b-30d9f4637138" (UID: "8fb6be66-7fef-4554-897b-30d9f4637138"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.895583 4794 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8fb6be66-7fef-4554-897b-30d9f4637138-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.895804 4794 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8fb6be66-7fef-4554-897b-30d9f4637138-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 17:27:12 crc kubenswrapper[4794]: I0216 17:27:12.895861 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fb6be66-7fef-4554-897b-30d9f4637138-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.306850 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\") pod \"8fb6be66-7fef-4554-897b-30d9f4637138\" (UID: \"8fb6be66-7fef-4554-897b-30d9f4637138\") " Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.344429 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1" (OuterVolumeSpecName: "persistence") pod "8fb6be66-7fef-4554-897b-30d9f4637138" (UID: "8fb6be66-7fef-4554-897b-30d9f4637138"). InnerVolumeSpecName "pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.410659 4794 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\") on node \"crc\" " Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.418079 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"8fb6be66-7fef-4554-897b-30d9f4637138","Type":"ContainerDied","Data":"ee57265729545919a3dfbdf0d3a200acd3d16f60923386d102805b7aabe256da"} Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.418148 4794 scope.go:117] "RemoveContainer" containerID="523fb59c255a777ff296c7e21c97e54cffbe6d1d35fb7cb70cd1ded47a89b767" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.418464 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.460721 4794 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.461039 4794 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1") on node "crc" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.467878 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.477044 4794 scope.go:117] "RemoveContainer" containerID="a5611785ff80a2040a0e9583d8fe5567236fc1088f42337abc77e4841bba2724" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.489562 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.506683 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 17:27:13 crc kubenswrapper[4794]: E0216 17:27:13.510718 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb6be66-7fef-4554-897b-30d9f4637138" containerName="setup-container" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.510882 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb6be66-7fef-4554-897b-30d9f4637138" containerName="setup-container" Feb 16 17:27:13 crc kubenswrapper[4794]: E0216 17:27:13.510981 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb6be66-7fef-4554-897b-30d9f4637138" containerName="rabbitmq" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.511048 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb6be66-7fef-4554-897b-30d9f4637138" containerName="rabbitmq" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.511374 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb6be66-7fef-4554-897b-30d9f4637138" containerName="rabbitmq" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.512931 4794 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.514063 4794 reconciler_common.go:293] "Volume detached for volume \"pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\") on node \"crc\" DevicePath \"\"" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.568543 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.616270 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf2n4\" (UniqueName: \"kubernetes.io/projected/b487594f-298c-477a-bd90-487d9f072b6e-kube-api-access-gf2n4\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.616351 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b487594f-298c-477a-bd90-487d9f072b6e-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.616388 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b487594f-298c-477a-bd90-487d9f072b6e-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.616407 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b487594f-298c-477a-bd90-487d9f072b6e-erlang-cookie-secret\") 
pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.616443 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b487594f-298c-477a-bd90-487d9f072b6e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.616458 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b487594f-298c-477a-bd90-487d9f072b6e-server-conf\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.616492 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b487594f-298c-477a-bd90-487d9f072b6e-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.616511 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b487594f-298c-477a-bd90-487d9f072b6e-config-data\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.616575 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b487594f-298c-477a-bd90-487d9f072b6e-pod-info\") pod \"rabbitmq-server-1\" (UID: 
\"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.616644 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b487594f-298c-477a-bd90-487d9f072b6e-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.616673 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.724791 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b487594f-298c-477a-bd90-487d9f072b6e-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.725103 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.725247 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf2n4\" (UniqueName: \"kubernetes.io/projected/b487594f-298c-477a-bd90-487d9f072b6e-kube-api-access-gf2n4\") pod \"rabbitmq-server-1\" (UID: 
\"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.725733 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b487594f-298c-477a-bd90-487d9f072b6e-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.725945 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b487594f-298c-477a-bd90-487d9f072b6e-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.726024 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b487594f-298c-477a-bd90-487d9f072b6e-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.726102 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b487594f-298c-477a-bd90-487d9f072b6e-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.726115 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b487594f-298c-477a-bd90-487d9f072b6e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.726201 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b487594f-298c-477a-bd90-487d9f072b6e-server-conf\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.726344 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b487594f-298c-477a-bd90-487d9f072b6e-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.726399 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b487594f-298c-477a-bd90-487d9f072b6e-config-data\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.726711 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b487594f-298c-477a-bd90-487d9f072b6e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.727699 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b487594f-298c-477a-bd90-487d9f072b6e-server-conf\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.728279 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/b487594f-298c-477a-bd90-487d9f072b6e-pod-info\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.728971 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b487594f-298c-477a-bd90-487d9f072b6e-config-data\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.729114 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b487594f-298c-477a-bd90-487d9f072b6e-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.733247 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b487594f-298c-477a-bd90-487d9f072b6e-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.734270 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.734327 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/242f881427f1e742f812f05a8fc0a139e128bcd26a1d8cef4f20918c4b6df8a4/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.735192 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b487594f-298c-477a-bd90-487d9f072b6e-pod-info\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.739237 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b487594f-298c-477a-bd90-487d9f072b6e-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.741240 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b487594f-298c-477a-bd90-487d9f072b6e-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.752252 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf2n4\" (UniqueName: \"kubernetes.io/projected/b487594f-298c-477a-bd90-487d9f072b6e-kube-api-access-gf2n4\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " 
pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: E0216 17:27:13.793001 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.805437 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bba1a5c-beee-4d93-8d8d-f837ee7c43c1\") pod \"rabbitmq-server-1\" (UID: \"b487594f-298c-477a-bd90-487d9f072b6e\") " pod="openstack/rabbitmq-server-1" Feb 16 17:27:13 crc kubenswrapper[4794]: I0216 17:27:13.921117 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 17:27:14 crc kubenswrapper[4794]: I0216 17:27:14.429759 4794 generic.go:334] "Generic (PLEG): container finished" podID="08d7a50c-a4ea-45cd-81d7-d962bc1921d5" containerID="a0a89afadee34e7f07564898c60faa0f6c0d06a91d229669c2937755eb51d066" exitCode=0 Feb 16 17:27:14 crc kubenswrapper[4794]: I0216 17:27:14.429840 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w" event={"ID":"08d7a50c-a4ea-45cd-81d7-d962bc1921d5","Type":"ContainerDied","Data":"a0a89afadee34e7f07564898c60faa0f6c0d06a91d229669c2937755eb51d066"} Feb 16 17:27:14 crc kubenswrapper[4794]: W0216 17:27:14.516959 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb487594f_298c_477a_bd90_487d9f072b6e.slice/crio-05e5f143391cfca04dd13a03e781c39f72ddc4723ac417529c9dddcae25a32c3 WatchSource:0}: Error finding container 
05e5f143391cfca04dd13a03e781c39f72ddc4723ac417529c9dddcae25a32c3: Status 404 returned error can't find the container with id 05e5f143391cfca04dd13a03e781c39f72ddc4723ac417529c9dddcae25a32c3 Feb 16 17:27:14 crc kubenswrapper[4794]: I0216 17:27:14.519032 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 17:27:14 crc kubenswrapper[4794]: I0216 17:27:14.804324 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fb6be66-7fef-4554-897b-30d9f4637138" path="/var/lib/kubelet/pods/8fb6be66-7fef-4554-897b-30d9f4637138/volumes" Feb 16 17:27:15 crc kubenswrapper[4794]: I0216 17:27:15.446196 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"b487594f-298c-477a-bd90-487d9f072b6e","Type":"ContainerStarted","Data":"05e5f143391cfca04dd13a03e781c39f72ddc4723ac417529c9dddcae25a32c3"} Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.120351 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.194817 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-inventory\") pod \"08d7a50c-a4ea-45cd-81d7-d962bc1921d5\" (UID: \"08d7a50c-a4ea-45cd-81d7-d962bc1921d5\") " Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.194914 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs62b\" (UniqueName: \"kubernetes.io/projected/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-kube-api-access-rs62b\") pod \"08d7a50c-a4ea-45cd-81d7-d962bc1921d5\" (UID: \"08d7a50c-a4ea-45cd-81d7-d962bc1921d5\") " Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.194966 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-repo-setup-combined-ca-bundle\") pod \"08d7a50c-a4ea-45cd-81d7-d962bc1921d5\" (UID: \"08d7a50c-a4ea-45cd-81d7-d962bc1921d5\") " Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.195066 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-ssh-key-openstack-edpm-ipam\") pod \"08d7a50c-a4ea-45cd-81d7-d962bc1921d5\" (UID: \"08d7a50c-a4ea-45cd-81d7-d962bc1921d5\") " Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.201854 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "08d7a50c-a4ea-45cd-81d7-d962bc1921d5" (UID: "08d7a50c-a4ea-45cd-81d7-d962bc1921d5"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.201887 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-kube-api-access-rs62b" (OuterVolumeSpecName: "kube-api-access-rs62b") pod "08d7a50c-a4ea-45cd-81d7-d962bc1921d5" (UID: "08d7a50c-a4ea-45cd-81d7-d962bc1921d5"). InnerVolumeSpecName "kube-api-access-rs62b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.239749 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "08d7a50c-a4ea-45cd-81d7-d962bc1921d5" (UID: "08d7a50c-a4ea-45cd-81d7-d962bc1921d5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.247628 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-inventory" (OuterVolumeSpecName: "inventory") pod "08d7a50c-a4ea-45cd-81d7-d962bc1921d5" (UID: "08d7a50c-a4ea-45cd-81d7-d962bc1921d5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.298241 4794 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.298287 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs62b\" (UniqueName: \"kubernetes.io/projected/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-kube-api-access-rs62b\") on node \"crc\" DevicePath \"\"" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.298301 4794 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.298315 4794 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08d7a50c-a4ea-45cd-81d7-d962bc1921d5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.459779 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"b487594f-298c-477a-bd90-487d9f072b6e","Type":"ContainerStarted","Data":"0e3044c4b0425f4372eadb91b85e9137578fa7b63c453b19bb61c87afaf347e6"} Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.463929 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w" event={"ID":"08d7a50c-a4ea-45cd-81d7-d962bc1921d5","Type":"ContainerDied","Data":"4297fa189dac34b5916291507f59b0de1429f47b31f9d7b96a8713b75f09045a"} Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.463981 4794 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="4297fa189dac34b5916291507f59b0de1429f47b31f9d7b96a8713b75f09045a" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.464169 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.572289 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2l6mb"] Feb 16 17:27:16 crc kubenswrapper[4794]: E0216 17:27:16.572877 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08d7a50c-a4ea-45cd-81d7-d962bc1921d5" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.572904 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d7a50c-a4ea-45cd-81d7-d962bc1921d5" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.573241 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="08d7a50c-a4ea-45cd-81d7-d962bc1921d5" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.574392 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2l6mb" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.578232 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kshzw" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.578489 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.578611 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.578722 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.589376 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2l6mb"] Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.726325 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4679cdf0-0e90-4126-91b5-5411ea4d9452-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2l6mb\" (UID: \"4679cdf0-0e90-4126-91b5-5411ea4d9452\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2l6mb" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.726404 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4679cdf0-0e90-4126-91b5-5411ea4d9452-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2l6mb\" (UID: \"4679cdf0-0e90-4126-91b5-5411ea4d9452\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2l6mb" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.726582 4794 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdqv2\" (UniqueName: \"kubernetes.io/projected/4679cdf0-0e90-4126-91b5-5411ea4d9452-kube-api-access-jdqv2\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2l6mb\" (UID: \"4679cdf0-0e90-4126-91b5-5411ea4d9452\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2l6mb" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.829035 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4679cdf0-0e90-4126-91b5-5411ea4d9452-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2l6mb\" (UID: \"4679cdf0-0e90-4126-91b5-5411ea4d9452\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2l6mb" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.829101 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4679cdf0-0e90-4126-91b5-5411ea4d9452-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2l6mb\" (UID: \"4679cdf0-0e90-4126-91b5-5411ea4d9452\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2l6mb" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.829247 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdqv2\" (UniqueName: \"kubernetes.io/projected/4679cdf0-0e90-4126-91b5-5411ea4d9452-kube-api-access-jdqv2\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2l6mb\" (UID: \"4679cdf0-0e90-4126-91b5-5411ea4d9452\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2l6mb" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.835956 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4679cdf0-0e90-4126-91b5-5411ea4d9452-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2l6mb\" (UID: 
\"4679cdf0-0e90-4126-91b5-5411ea4d9452\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2l6mb" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.836964 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4679cdf0-0e90-4126-91b5-5411ea4d9452-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2l6mb\" (UID: \"4679cdf0-0e90-4126-91b5-5411ea4d9452\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2l6mb" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.844033 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdqv2\" (UniqueName: \"kubernetes.io/projected/4679cdf0-0e90-4126-91b5-5411ea4d9452-kube-api-access-jdqv2\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2l6mb\" (UID: \"4679cdf0-0e90-4126-91b5-5411ea4d9452\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2l6mb" Feb 16 17:27:16 crc kubenswrapper[4794]: I0216 17:27:16.912078 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2l6mb" Feb 16 17:27:17 crc kubenswrapper[4794]: I0216 17:27:17.513952 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="8fb6be66-7fef-4554-897b-30d9f4637138" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: i/o timeout" Feb 16 17:27:17 crc kubenswrapper[4794]: I0216 17:27:17.534532 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2l6mb"] Feb 16 17:27:18 crc kubenswrapper[4794]: I0216 17:27:18.485890 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2l6mb" event={"ID":"4679cdf0-0e90-4126-91b5-5411ea4d9452","Type":"ContainerStarted","Data":"27c8238f48326ed199582e79c6a0b6215161b382e18c29489cb31190e5596999"} Feb 16 17:27:18 crc kubenswrapper[4794]: I0216 17:27:18.486318 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2l6mb" event={"ID":"4679cdf0-0e90-4126-91b5-5411ea4d9452","Type":"ContainerStarted","Data":"6ca9c217089a1b8113f5cc64c8a5551e4f453c9421b7d0413b8c866eb54e4e1d"} Feb 16 17:27:18 crc kubenswrapper[4794]: I0216 17:27:18.508159 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2l6mb" podStartSLOduration=1.882133245 podStartE2EDuration="2.50814116s" podCreationTimestamp="2026-02-16 17:27:16 +0000 UTC" firstStartedPulling="2026-02-16 17:27:17.54213739 +0000 UTC m=+1663.490232037" lastFinishedPulling="2026-02-16 17:27:18.168145305 +0000 UTC m=+1664.116239952" observedRunningTime="2026-02-16 17:27:18.498417173 +0000 UTC m=+1664.446511820" watchObservedRunningTime="2026-02-16 17:27:18.50814116 +0000 UTC m=+1664.456235807" Feb 16 17:27:18 crc kubenswrapper[4794]: I0216 17:27:18.792254 4794 scope.go:117] "RemoveContainer" 
containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691" Feb 16 17:27:18 crc kubenswrapper[4794]: E0216 17:27:18.792562 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:27:21 crc kubenswrapper[4794]: I0216 17:27:21.521222 4794 generic.go:334] "Generic (PLEG): container finished" podID="4679cdf0-0e90-4126-91b5-5411ea4d9452" containerID="27c8238f48326ed199582e79c6a0b6215161b382e18c29489cb31190e5596999" exitCode=0 Feb 16 17:27:21 crc kubenswrapper[4794]: I0216 17:27:21.521380 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2l6mb" event={"ID":"4679cdf0-0e90-4126-91b5-5411ea4d9452","Type":"ContainerDied","Data":"27c8238f48326ed199582e79c6a0b6215161b382e18c29489cb31190e5596999"} Feb 16 17:27:22 crc kubenswrapper[4794]: E0216 17:27:22.793086 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.019815 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2l6mb" Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.103925 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdqv2\" (UniqueName: \"kubernetes.io/projected/4679cdf0-0e90-4126-91b5-5411ea4d9452-kube-api-access-jdqv2\") pod \"4679cdf0-0e90-4126-91b5-5411ea4d9452\" (UID: \"4679cdf0-0e90-4126-91b5-5411ea4d9452\") " Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.104029 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4679cdf0-0e90-4126-91b5-5411ea4d9452-ssh-key-openstack-edpm-ipam\") pod \"4679cdf0-0e90-4126-91b5-5411ea4d9452\" (UID: \"4679cdf0-0e90-4126-91b5-5411ea4d9452\") " Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.104170 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4679cdf0-0e90-4126-91b5-5411ea4d9452-inventory\") pod \"4679cdf0-0e90-4126-91b5-5411ea4d9452\" (UID: \"4679cdf0-0e90-4126-91b5-5411ea4d9452\") " Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.109655 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4679cdf0-0e90-4126-91b5-5411ea4d9452-kube-api-access-jdqv2" (OuterVolumeSpecName: "kube-api-access-jdqv2") pod "4679cdf0-0e90-4126-91b5-5411ea4d9452" (UID: "4679cdf0-0e90-4126-91b5-5411ea4d9452"). InnerVolumeSpecName "kube-api-access-jdqv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:27:23 crc kubenswrapper[4794]: E0216 17:27:23.139618 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4679cdf0-0e90-4126-91b5-5411ea4d9452-inventory podName:4679cdf0-0e90-4126-91b5-5411ea4d9452 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:27:23.639591795 +0000 UTC m=+1669.587686432 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/4679cdf0-0e90-4126-91b5-5411ea4d9452-inventory") pod "4679cdf0-0e90-4126-91b5-5411ea4d9452" (UID: "4679cdf0-0e90-4126-91b5-5411ea4d9452") : error deleting /var/lib/kubelet/pods/4679cdf0-0e90-4126-91b5-5411ea4d9452/volume-subpaths: remove /var/lib/kubelet/pods/4679cdf0-0e90-4126-91b5-5411ea4d9452/volume-subpaths: no such file or directory
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.142520 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4679cdf0-0e90-4126-91b5-5411ea4d9452-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4679cdf0-0e90-4126-91b5-5411ea4d9452" (UID: "4679cdf0-0e90-4126-91b5-5411ea4d9452"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.206729 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdqv2\" (UniqueName: \"kubernetes.io/projected/4679cdf0-0e90-4126-91b5-5411ea4d9452-kube-api-access-jdqv2\") on node \"crc\" DevicePath \"\""
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.206940 4794 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4679cdf0-0e90-4126-91b5-5411ea4d9452-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.550887 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2l6mb" event={"ID":"4679cdf0-0e90-4126-91b5-5411ea4d9452","Type":"ContainerDied","Data":"6ca9c217089a1b8113f5cc64c8a5551e4f453c9421b7d0413b8c866eb54e4e1d"}
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.551230 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ca9c217089a1b8113f5cc64c8a5551e4f453c9421b7d0413b8c866eb54e4e1d"
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.551289 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2l6mb"
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.719277 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4679cdf0-0e90-4126-91b5-5411ea4d9452-inventory\") pod \"4679cdf0-0e90-4126-91b5-5411ea4d9452\" (UID: \"4679cdf0-0e90-4126-91b5-5411ea4d9452\") "
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.733566 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4679cdf0-0e90-4126-91b5-5411ea4d9452-inventory" (OuterVolumeSpecName: "inventory") pod "4679cdf0-0e90-4126-91b5-5411ea4d9452" (UID: "4679cdf0-0e90-4126-91b5-5411ea4d9452"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.752976 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44"]
Feb 16 17:27:23 crc kubenswrapper[4794]: E0216 17:27:23.753971 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4679cdf0-0e90-4126-91b5-5411ea4d9452" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.754085 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="4679cdf0-0e90-4126-91b5-5411ea4d9452" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.754454 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="4679cdf0-0e90-4126-91b5-5411ea4d9452" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.755591 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44"
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.776556 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44"]
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.822911 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44\" (UID: \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44"
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.823003 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44\" (UID: \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44"
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.823088 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44\" (UID: \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44"
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.823112 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxjhk\" (UniqueName: \"kubernetes.io/projected/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-kube-api-access-lxjhk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44\" (UID: \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44"
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.823224 4794 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4679cdf0-0e90-4126-91b5-5411ea4d9452-inventory\") on node \"crc\" DevicePath \"\""
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.924894 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44\" (UID: \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44"
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.924940 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxjhk\" (UniqueName: \"kubernetes.io/projected/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-kube-api-access-lxjhk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44\" (UID: \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44"
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.925110 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44\" (UID: \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44"
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.925168 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44\" (UID: \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44"
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.931177 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44\" (UID: \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44"
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.932773 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44\" (UID: \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44"
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.934531 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44\" (UID: \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44"
Feb 16 17:27:23 crc kubenswrapper[4794]: I0216 17:27:23.944900 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxjhk\" (UniqueName: \"kubernetes.io/projected/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-kube-api-access-lxjhk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44\" (UID: \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44"
Feb 16 17:27:24 crc kubenswrapper[4794]: I0216 17:27:24.107348 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44"
Feb 16 17:27:24 crc kubenswrapper[4794]: I0216 17:27:24.734789 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44"]
Feb 16 17:27:24 crc kubenswrapper[4794]: E0216 17:27:24.808917 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:27:25 crc kubenswrapper[4794]: I0216 17:27:25.575790 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44" event={"ID":"00aac5cd-2d06-4021-9d8d-5724b2ad87bc","Type":"ContainerStarted","Data":"9d679cb0fd48155e67d7a08aaa2b07178827aaddad991c51011b1448792c347c"}
Feb 16 17:27:25 crc kubenswrapper[4794]: I0216 17:27:25.576125 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44" event={"ID":"00aac5cd-2d06-4021-9d8d-5724b2ad87bc","Type":"ContainerStarted","Data":"3dca23c78a745788d4ca81b43bc70ab0047dd0c3f88bfd3465d527ff347e7eba"}
Feb 16 17:27:25 crc kubenswrapper[4794]: I0216 17:27:25.614483 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44" podStartSLOduration=2.2105102150000002 podStartE2EDuration="2.61446415s" podCreationTimestamp="2026-02-16 17:27:23 +0000 UTC" firstStartedPulling="2026-02-16 17:27:24.7313797 +0000 UTC m=+1670.679474347" lastFinishedPulling="2026-02-16 17:27:25.135333635 +0000 UTC m=+1671.083428282" observedRunningTime="2026-02-16 17:27:25.602721006 +0000 UTC m=+1671.550815653" watchObservedRunningTime="2026-02-16 17:27:25.61446415 +0000 UTC m=+1671.562558797"
Feb 16 17:27:33 crc kubenswrapper[4794]: I0216 17:27:33.791700 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691"
Feb 16 17:27:33 crc kubenswrapper[4794]: E0216 17:27:33.792594 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:27:35 crc kubenswrapper[4794]: E0216 17:27:35.918865 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 17:27:35 crc kubenswrapper[4794]: E0216 17:27:35.919443 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 17:27:35 crc kubenswrapper[4794]: E0216 17:27:35.919574 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2h5l2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7gcsf_openstack(c695f880-15cb-45b1-8545-60d8437ec631): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 17:27:35 crc kubenswrapper[4794]: E0216 17:27:35.920636 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:27:38 crc kubenswrapper[4794]: I0216 17:27:38.201447 4794 scope.go:117] "RemoveContainer" containerID="24980b9a1476a21f65f65abd38013b3b1da41240e84dcd20727b56adf4c610f9"
Feb 16 17:27:38 crc kubenswrapper[4794]: I0216 17:27:38.303330 4794 scope.go:117] "RemoveContainer" containerID="78060d4db70d41c4b478fe59a79e973c4b66567fab8194633868092f4711eba2"
Feb 16 17:27:38 crc kubenswrapper[4794]: I0216 17:27:38.351538 4794 scope.go:117] "RemoveContainer" containerID="90ce9cab9f6d005ccfe078c26004325c9eaba9f760b189549e61db3ce47e0448"
Feb 16 17:27:38 crc kubenswrapper[4794]: I0216 17:27:38.426641 4794 scope.go:117] "RemoveContainer" containerID="59917e61f52528956f2e22aba28ce904d4a6214fa1d600aeff7c7ed4187f0a79"
Feb 16 17:27:38 crc kubenswrapper[4794]: E0216 17:27:38.910637 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 17:27:38 crc kubenswrapper[4794]: E0216 17:27:38.910709 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 17:27:38 crc kubenswrapper[4794]: E0216 17:27:38.910865 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh58dh6ch557h84h55ch564h5bh58fh5c8h5d4h584h669h667h569h59hd5hdbh9dh67ch5f9h59fh597h96h664h687h66dhfch5ddh5b7h88h59cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9v9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8981f528-1f74-4d56-a93c-22860725b490): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 17:27:38 crc kubenswrapper[4794]: E0216 17:27:38.912112 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:27:48 crc kubenswrapper[4794]: I0216 17:27:48.793630 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691"
Feb 16 17:27:48 crc kubenswrapper[4794]: E0216 17:27:48.794516 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:27:48 crc kubenswrapper[4794]: I0216 17:27:48.828844 4794 generic.go:334] "Generic (PLEG): container finished" podID="b487594f-298c-477a-bd90-487d9f072b6e" containerID="0e3044c4b0425f4372eadb91b85e9137578fa7b63c453b19bb61c87afaf347e6" exitCode=0
Feb 16 17:27:48 crc kubenswrapper[4794]: I0216 17:27:48.828890 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"b487594f-298c-477a-bd90-487d9f072b6e","Type":"ContainerDied","Data":"0e3044c4b0425f4372eadb91b85e9137578fa7b63c453b19bb61c87afaf347e6"}
Feb 16 17:27:49 crc kubenswrapper[4794]: E0216 17:27:49.793567 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:27:49 crc kubenswrapper[4794]: E0216 17:27:49.793882 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:27:49 crc kubenswrapper[4794]: I0216 17:27:49.841225 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"b487594f-298c-477a-bd90-487d9f072b6e","Type":"ContainerStarted","Data":"d89cee34d4c9cca4a6599e740b6a0207b29815a3c47cfbe1482c684d6363413b"}
Feb 16 17:27:49 crc kubenswrapper[4794]: I0216 17:27:49.841464 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1"
Feb 16 17:27:49 crc kubenswrapper[4794]: I0216 17:27:49.870671 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=36.870652647 podStartE2EDuration="36.870652647s" podCreationTimestamp="2026-02-16 17:27:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:27:49.860876259 +0000 UTC m=+1695.808970906" watchObservedRunningTime="2026-02-16 17:27:49.870652647 +0000 UTC m=+1695.818747294"
Feb 16 17:27:59 crc kubenswrapper[4794]: I0216 17:27:59.792828 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691"
Feb 16 17:27:59 crc kubenswrapper[4794]: E0216 17:27:59.793580 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:28:01 crc kubenswrapper[4794]: E0216 17:28:01.794502 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:28:02 crc kubenswrapper[4794]: E0216 17:28:02.807985 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:28:03 crc kubenswrapper[4794]: I0216 17:28:03.923475 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1"
Feb 16 17:28:04 crc kubenswrapper[4794]: I0216 17:28:04.014011 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 16 17:28:08 crc kubenswrapper[4794]: I0216 17:28:08.329534 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="026253d8-eaea-4c12-91e0-455331cdaa5e" containerName="rabbitmq" containerID="cri-o://a25e90d93e73c28d15594943c02e0b9a83bbf60e93bace161d3cb1740bb284e8" gracePeriod=604796
Feb 16 17:28:12 crc kubenswrapper[4794]: I0216 17:28:12.463083 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="026253d8-eaea-4c12-91e0-455331cdaa5e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused"
Feb 16 17:28:12 crc kubenswrapper[4794]: E0216 17:28:12.795574 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:28:14 crc kubenswrapper[4794]: I0216 17:28:14.800166 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691"
Feb 16 17:28:14 crc kubenswrapper[4794]: E0216 17:28:14.801004 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:28:14 crc kubenswrapper[4794]: I0216 17:28:14.980174 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.059809 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-erlang-cookie\") pod \"026253d8-eaea-4c12-91e0-455331cdaa5e\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") "
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.059891 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-confd\") pod \"026253d8-eaea-4c12-91e0-455331cdaa5e\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") "
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.059917 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-plugins\") pod \"026253d8-eaea-4c12-91e0-455331cdaa5e\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") "
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.059947 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/026253d8-eaea-4c12-91e0-455331cdaa5e-plugins-conf\") pod \"026253d8-eaea-4c12-91e0-455331cdaa5e\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") "
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.059978 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/026253d8-eaea-4c12-91e0-455331cdaa5e-config-data\") pod \"026253d8-eaea-4c12-91e0-455331cdaa5e\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") "
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.060691 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a365df-e5d2-47cd-ba73-ad62767e7783\") pod \"026253d8-eaea-4c12-91e0-455331cdaa5e\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") "
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.060755 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "026253d8-eaea-4c12-91e0-455331cdaa5e" (UID: "026253d8-eaea-4c12-91e0-455331cdaa5e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.060792 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "026253d8-eaea-4c12-91e0-455331cdaa5e" (UID: "026253d8-eaea-4c12-91e0-455331cdaa5e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.060829 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/026253d8-eaea-4c12-91e0-455331cdaa5e-server-conf\") pod \"026253d8-eaea-4c12-91e0-455331cdaa5e\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") "
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.060864 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/026253d8-eaea-4c12-91e0-455331cdaa5e-pod-info\") pod \"026253d8-eaea-4c12-91e0-455331cdaa5e\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") "
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.060881 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/026253d8-eaea-4c12-91e0-455331cdaa5e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "026253d8-eaea-4c12-91e0-455331cdaa5e" (UID: "026253d8-eaea-4c12-91e0-455331cdaa5e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.060907 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4skrz\" (UniqueName: \"kubernetes.io/projected/026253d8-eaea-4c12-91e0-455331cdaa5e-kube-api-access-4skrz\") pod \"026253d8-eaea-4c12-91e0-455331cdaa5e\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") "
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.060978 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-tls\") pod \"026253d8-eaea-4c12-91e0-455331cdaa5e\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") "
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.061101 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/026253d8-eaea-4c12-91e0-455331cdaa5e-erlang-cookie-secret\") pod \"026253d8-eaea-4c12-91e0-455331cdaa5e\" (UID: \"026253d8-eaea-4c12-91e0-455331cdaa5e\") "
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.061947 4794 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.061969 4794 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.061979 4794 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/026253d8-eaea-4c12-91e0-455331cdaa5e-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.067296 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/026253d8-eaea-4c12-91e0-455331cdaa5e-kube-api-access-4skrz" (OuterVolumeSpecName: "kube-api-access-4skrz") pod "026253d8-eaea-4c12-91e0-455331cdaa5e" (UID: "026253d8-eaea-4c12-91e0-455331cdaa5e"). InnerVolumeSpecName "kube-api-access-4skrz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.082715 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/026253d8-eaea-4c12-91e0-455331cdaa5e-pod-info" (OuterVolumeSpecName: "pod-info") pod "026253d8-eaea-4c12-91e0-455331cdaa5e" (UID: "026253d8-eaea-4c12-91e0-455331cdaa5e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.084558 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "026253d8-eaea-4c12-91e0-455331cdaa5e" (UID: "026253d8-eaea-4c12-91e0-455331cdaa5e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.102478 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/026253d8-eaea-4c12-91e0-455331cdaa5e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "026253d8-eaea-4c12-91e0-455331cdaa5e" (UID: "026253d8-eaea-4c12-91e0-455331cdaa5e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.133044 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a365df-e5d2-47cd-ba73-ad62767e7783" (OuterVolumeSpecName: "persistence") pod "026253d8-eaea-4c12-91e0-455331cdaa5e" (UID: "026253d8-eaea-4c12-91e0-455331cdaa5e"). InnerVolumeSpecName "pvc-89a365df-e5d2-47cd-ba73-ad62767e7783". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.137398 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/026253d8-eaea-4c12-91e0-455331cdaa5e-config-data" (OuterVolumeSpecName: "config-data") pod "026253d8-eaea-4c12-91e0-455331cdaa5e" (UID: "026253d8-eaea-4c12-91e0-455331cdaa5e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.164892 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/026253d8-eaea-4c12-91e0-455331cdaa5e-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.164957 4794 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-89a365df-e5d2-47cd-ba73-ad62767e7783\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a365df-e5d2-47cd-ba73-ad62767e7783\") on node \"crc\" "
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.164973 4794 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/026253d8-eaea-4c12-91e0-455331cdaa5e-pod-info\") on node \"crc\" DevicePath \"\""
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.164986 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4skrz\" (UniqueName:
\"kubernetes.io/projected/026253d8-eaea-4c12-91e0-455331cdaa5e-kube-api-access-4skrz\") on node \"crc\" DevicePath \"\"" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.164998 4794 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.165009 4794 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/026253d8-eaea-4c12-91e0-455331cdaa5e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.186151 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/026253d8-eaea-4c12-91e0-455331cdaa5e-server-conf" (OuterVolumeSpecName: "server-conf") pod "026253d8-eaea-4c12-91e0-455331cdaa5e" (UID: "026253d8-eaea-4c12-91e0-455331cdaa5e"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.217116 4794 generic.go:334] "Generic (PLEG): container finished" podID="026253d8-eaea-4c12-91e0-455331cdaa5e" containerID="a25e90d93e73c28d15594943c02e0b9a83bbf60e93bace161d3cb1740bb284e8" exitCode=0 Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.217162 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"026253d8-eaea-4c12-91e0-455331cdaa5e","Type":"ContainerDied","Data":"a25e90d93e73c28d15594943c02e0b9a83bbf60e93bace161d3cb1740bb284e8"} Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.217187 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"026253d8-eaea-4c12-91e0-455331cdaa5e","Type":"ContainerDied","Data":"25237b1f3d7add63fa3f53454163ee819bb25eb18acd6ba04da6b9f4b494bb8a"} Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.217204 4794 scope.go:117] "RemoveContainer" containerID="a25e90d93e73c28d15594943c02e0b9a83bbf60e93bace161d3cb1740bb284e8" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.217370 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.226664 4794 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.226800 4794 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-89a365df-e5d2-47cd-ba73-ad62767e7783" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a365df-e5d2-47cd-ba73-ad62767e7783") on node "crc" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.258496 4794 scope.go:117] "RemoveContainer" containerID="5f28321fb236a1745593b9c7644f21bfbf3b8430f0f512f514c4f8f1c040ee02" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.267734 4794 reconciler_common.go:293] "Volume detached for volume \"pvc-89a365df-e5d2-47cd-ba73-ad62767e7783\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a365df-e5d2-47cd-ba73-ad62767e7783\") on node \"crc\" DevicePath \"\"" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.267763 4794 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/026253d8-eaea-4c12-91e0-455331cdaa5e-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.278887 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "026253d8-eaea-4c12-91e0-455331cdaa5e" (UID: "026253d8-eaea-4c12-91e0-455331cdaa5e"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.282743 4794 scope.go:117] "RemoveContainer" containerID="a25e90d93e73c28d15594943c02e0b9a83bbf60e93bace161d3cb1740bb284e8" Feb 16 17:28:15 crc kubenswrapper[4794]: E0216 17:28:15.283790 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a25e90d93e73c28d15594943c02e0b9a83bbf60e93bace161d3cb1740bb284e8\": container with ID starting with a25e90d93e73c28d15594943c02e0b9a83bbf60e93bace161d3cb1740bb284e8 not found: ID does not exist" containerID="a25e90d93e73c28d15594943c02e0b9a83bbf60e93bace161d3cb1740bb284e8" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.283821 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a25e90d93e73c28d15594943c02e0b9a83bbf60e93bace161d3cb1740bb284e8"} err="failed to get container status \"a25e90d93e73c28d15594943c02e0b9a83bbf60e93bace161d3cb1740bb284e8\": rpc error: code = NotFound desc = could not find container \"a25e90d93e73c28d15594943c02e0b9a83bbf60e93bace161d3cb1740bb284e8\": container with ID starting with a25e90d93e73c28d15594943c02e0b9a83bbf60e93bace161d3cb1740bb284e8 not found: ID does not exist" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.283842 4794 scope.go:117] "RemoveContainer" containerID="5f28321fb236a1745593b9c7644f21bfbf3b8430f0f512f514c4f8f1c040ee02" Feb 16 17:28:15 crc kubenswrapper[4794]: E0216 17:28:15.284245 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f28321fb236a1745593b9c7644f21bfbf3b8430f0f512f514c4f8f1c040ee02\": container with ID starting with 5f28321fb236a1745593b9c7644f21bfbf3b8430f0f512f514c4f8f1c040ee02 not found: ID does not exist" containerID="5f28321fb236a1745593b9c7644f21bfbf3b8430f0f512f514c4f8f1c040ee02" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.284291 
4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f28321fb236a1745593b9c7644f21bfbf3b8430f0f512f514c4f8f1c040ee02"} err="failed to get container status \"5f28321fb236a1745593b9c7644f21bfbf3b8430f0f512f514c4f8f1c040ee02\": rpc error: code = NotFound desc = could not find container \"5f28321fb236a1745593b9c7644f21bfbf3b8430f0f512f514c4f8f1c040ee02\": container with ID starting with 5f28321fb236a1745593b9c7644f21bfbf3b8430f0f512f514c4f8f1c040ee02 not found: ID does not exist" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.370202 4794 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/026253d8-eaea-4c12-91e0-455331cdaa5e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.558105 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.569676 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.599828 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 17:28:15 crc kubenswrapper[4794]: E0216 17:28:15.600514 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="026253d8-eaea-4c12-91e0-455331cdaa5e" containerName="rabbitmq" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.600537 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="026253d8-eaea-4c12-91e0-455331cdaa5e" containerName="rabbitmq" Feb 16 17:28:15 crc kubenswrapper[4794]: E0216 17:28:15.600579 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="026253d8-eaea-4c12-91e0-455331cdaa5e" containerName="setup-container" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.600588 4794 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="026253d8-eaea-4c12-91e0-455331cdaa5e" containerName="setup-container" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.600905 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="026253d8-eaea-4c12-91e0-455331cdaa5e" containerName="rabbitmq" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.602803 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.616154 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.676891 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba133018-dec1-47aa-92e3-a0e3440dec49-config-data\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.676963 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba133018-dec1-47aa-92e3-a0e3440dec49-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.676997 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-89a365df-e5d2-47cd-ba73-ad62767e7783\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a365df-e5d2-47cd-ba73-ad62767e7783\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.677142 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/ba133018-dec1-47aa-92e3-a0e3440dec49-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.677275 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba133018-dec1-47aa-92e3-a0e3440dec49-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.677357 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba133018-dec1-47aa-92e3-a0e3440dec49-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.677404 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba133018-dec1-47aa-92e3-a0e3440dec49-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.677436 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba133018-dec1-47aa-92e3-a0e3440dec49-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.677462 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/ba133018-dec1-47aa-92e3-a0e3440dec49-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.677523 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hhdt\" (UniqueName: \"kubernetes.io/projected/ba133018-dec1-47aa-92e3-a0e3440dec49-kube-api-access-9hhdt\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.677554 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba133018-dec1-47aa-92e3-a0e3440dec49-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.779059 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba133018-dec1-47aa-92e3-a0e3440dec49-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.779123 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba133018-dec1-47aa-92e3-a0e3440dec49-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.779152 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba133018-dec1-47aa-92e3-a0e3440dec49-pod-info\") pod \"rabbitmq-server-0\" 
(UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.779181 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba133018-dec1-47aa-92e3-a0e3440dec49-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.779204 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba133018-dec1-47aa-92e3-a0e3440dec49-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.779233 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hhdt\" (UniqueName: \"kubernetes.io/projected/ba133018-dec1-47aa-92e3-a0e3440dec49-kube-api-access-9hhdt\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.779268 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba133018-dec1-47aa-92e3-a0e3440dec49-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.779323 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba133018-dec1-47aa-92e3-a0e3440dec49-config-data\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.779372 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba133018-dec1-47aa-92e3-a0e3440dec49-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.779395 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-89a365df-e5d2-47cd-ba73-ad62767e7783\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a365df-e5d2-47cd-ba73-ad62767e7783\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.779451 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ba133018-dec1-47aa-92e3-a0e3440dec49-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.779935 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba133018-dec1-47aa-92e3-a0e3440dec49-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.780252 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba133018-dec1-47aa-92e3-a0e3440dec49-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.780896 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/ba133018-dec1-47aa-92e3-a0e3440dec49-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.781290 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba133018-dec1-47aa-92e3-a0e3440dec49-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.781658 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba133018-dec1-47aa-92e3-a0e3440dec49-config-data\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.783763 4794 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.783798 4794 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-89a365df-e5d2-47cd-ba73-ad62767e7783\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a365df-e5d2-47cd-ba73-ad62767e7783\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c0b4d361c22c333b13f1a0671c782685ad05346f3c98eaa4d7999cbaa1be313f/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.785124 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba133018-dec1-47aa-92e3-a0e3440dec49-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.786076 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba133018-dec1-47aa-92e3-a0e3440dec49-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.786118 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ba133018-dec1-47aa-92e3-a0e3440dec49-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.788230 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba133018-dec1-47aa-92e3-a0e3440dec49-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " 
pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.811538 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hhdt\" (UniqueName: \"kubernetes.io/projected/ba133018-dec1-47aa-92e3-a0e3440dec49-kube-api-access-9hhdt\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.891054 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-89a365df-e5d2-47cd-ba73-ad62767e7783\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89a365df-e5d2-47cd-ba73-ad62767e7783\") pod \"rabbitmq-server-0\" (UID: \"ba133018-dec1-47aa-92e3-a0e3440dec49\") " pod="openstack/rabbitmq-server-0" Feb 16 17:28:15 crc kubenswrapper[4794]: I0216 17:28:15.972353 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 17:28:16 crc kubenswrapper[4794]: I0216 17:28:16.453572 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 17:28:16 crc kubenswrapper[4794]: I0216 17:28:16.810163 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="026253d8-eaea-4c12-91e0-455331cdaa5e" path="/var/lib/kubelet/pods/026253d8-eaea-4c12-91e0-455331cdaa5e/volumes" Feb 16 17:28:17 crc kubenswrapper[4794]: I0216 17:28:17.268986 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba133018-dec1-47aa-92e3-a0e3440dec49","Type":"ContainerStarted","Data":"1f75362c596f7b4a437563434651dbd0e61ca2b9c3912e05cf1fffae536f6389"} Feb 16 17:28:17 crc kubenswrapper[4794]: E0216 17:28:17.795456 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:28:19 crc kubenswrapper[4794]: I0216 17:28:19.300534 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba133018-dec1-47aa-92e3-a0e3440dec49","Type":"ContainerStarted","Data":"2c6cc683895a81bcfb148cc09a98c9a60fc2281c0b9de5332208cbb8f95c38ab"} Feb 16 17:28:24 crc kubenswrapper[4794]: E0216 17:28:24.802578 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:28:27 crc kubenswrapper[4794]: I0216 17:28:27.822348 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-6cb67474dc-d4tmw" podUID="cd56173e-c7f0-4309-97a9-4bd89c7704f3" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 16 17:28:28 crc kubenswrapper[4794]: I0216 17:28:28.791918 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691" Feb 16 17:28:28 crc kubenswrapper[4794]: E0216 17:28:28.792229 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:28:31 crc kubenswrapper[4794]: E0216 17:28:31.793901 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:28:36 crc kubenswrapper[4794]: E0216 17:28:36.798932 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:28:38 crc kubenswrapper[4794]: I0216 17:28:38.567879 4794 scope.go:117] "RemoveContainer" containerID="42461fff9709ad54490eb287b1b85b0f2b88b64ac08a0a527d25b18ecc56ec7b" Feb 16 17:28:38 crc kubenswrapper[4794]: I0216 17:28:38.612370 4794 scope.go:117] "RemoveContainer" containerID="c16525667c36dad66eba954d729d6b86a5266e61911552421e34290fff35174d" Feb 16 17:28:41 crc kubenswrapper[4794]: I0216 17:28:41.792083 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691" Feb 16 17:28:41 crc kubenswrapper[4794]: E0216 17:28:41.792905 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:28:43 crc kubenswrapper[4794]: E0216 17:28:43.794094 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:28:47 crc kubenswrapper[4794]: E0216 17:28:47.795064 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:28:50 crc kubenswrapper[4794]: I0216 17:28:50.683855 4794 generic.go:334] "Generic (PLEG): container finished" podID="ba133018-dec1-47aa-92e3-a0e3440dec49" containerID="2c6cc683895a81bcfb148cc09a98c9a60fc2281c0b9de5332208cbb8f95c38ab" exitCode=0 Feb 16 17:28:50 crc kubenswrapper[4794]: I0216 17:28:50.683950 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba133018-dec1-47aa-92e3-a0e3440dec49","Type":"ContainerDied","Data":"2c6cc683895a81bcfb148cc09a98c9a60fc2281c0b9de5332208cbb8f95c38ab"} Feb 16 17:28:51 crc kubenswrapper[4794]: I0216 17:28:51.696861 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba133018-dec1-47aa-92e3-a0e3440dec49","Type":"ContainerStarted","Data":"326ad8333b7681818f5129fd4f98d3c692194217c560251df44820ba17d0b2d9"} Feb 16 17:28:51 crc kubenswrapper[4794]: I0216 17:28:51.698359 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 17:28:51 crc kubenswrapper[4794]: I0216 17:28:51.724910 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.724885642 podStartE2EDuration="36.724885642s" podCreationTimestamp="2026-02-16 17:28:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-16 17:28:51.717011499 +0000 UTC m=+1757.665106156" watchObservedRunningTime="2026-02-16 17:28:51.724885642 +0000 UTC m=+1757.672980289" Feb 16 17:28:54 crc kubenswrapper[4794]: I0216 17:28:54.804071 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691" Feb 16 17:28:54 crc kubenswrapper[4794]: E0216 17:28:54.804851 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:28:58 crc kubenswrapper[4794]: E0216 17:28:58.921133 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 17:28:58 crc kubenswrapper[4794]: E0216 17:28:58.921720 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 17:28:58 crc kubenswrapper[4794]: E0216 17:28:58.921859 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2h5l2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7gcsf_openstack(c695f880-15cb-45b1-8545-60d8437ec631): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:28:58 crc kubenswrapper[4794]: E0216 17:28:58.923134 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:29:02 crc kubenswrapper[4794]: E0216 17:29:02.919185 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 17:29:02 crc kubenswrapper[4794]: E0216 17:29:02.920027 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 17:29:02 crc kubenswrapper[4794]: E0216 17:29:02.920171 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh58dh6ch557h84h55ch564h5bh58fh5c8h5d4h584h669h667h569h59hd5hdbh9dh67ch5f9h59fh597h96h664h687h66dhfch5ddh5b7h88h59cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:
tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9v9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8981f528-1f74-4d56-a93c-22860725b490): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 17:29:02 crc kubenswrapper[4794]: E0216 17:29:02.921884 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:29:05 crc kubenswrapper[4794]: I0216 17:29:05.907931 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zjdxz"] Feb 16 17:29:05 crc kubenswrapper[4794]: I0216 17:29:05.911214 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zjdxz" Feb 16 17:29:05 crc kubenswrapper[4794]: I0216 17:29:05.937148 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zjdxz"] Feb 16 17:29:05 crc kubenswrapper[4794]: I0216 17:29:05.974552 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 16 17:29:06 crc kubenswrapper[4794]: I0216 17:29:06.120059 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e84beebf-64d8-47d0-8594-f7a028c9bf5d-utilities\") pod \"certified-operators-zjdxz\" (UID: \"e84beebf-64d8-47d0-8594-f7a028c9bf5d\") " pod="openshift-marketplace/certified-operators-zjdxz" Feb 16 17:29:06 crc kubenswrapper[4794]: I0216 17:29:06.122051 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbkdl\" (UniqueName: 
\"kubernetes.io/projected/e84beebf-64d8-47d0-8594-f7a028c9bf5d-kube-api-access-jbkdl\") pod \"certified-operators-zjdxz\" (UID: \"e84beebf-64d8-47d0-8594-f7a028c9bf5d\") " pod="openshift-marketplace/certified-operators-zjdxz" Feb 16 17:29:06 crc kubenswrapper[4794]: I0216 17:29:06.122589 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e84beebf-64d8-47d0-8594-f7a028c9bf5d-catalog-content\") pod \"certified-operators-zjdxz\" (UID: \"e84beebf-64d8-47d0-8594-f7a028c9bf5d\") " pod="openshift-marketplace/certified-operators-zjdxz" Feb 16 17:29:06 crc kubenswrapper[4794]: I0216 17:29:06.224819 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbkdl\" (UniqueName: \"kubernetes.io/projected/e84beebf-64d8-47d0-8594-f7a028c9bf5d-kube-api-access-jbkdl\") pod \"certified-operators-zjdxz\" (UID: \"e84beebf-64d8-47d0-8594-f7a028c9bf5d\") " pod="openshift-marketplace/certified-operators-zjdxz" Feb 16 17:29:06 crc kubenswrapper[4794]: I0216 17:29:06.224950 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e84beebf-64d8-47d0-8594-f7a028c9bf5d-catalog-content\") pod \"certified-operators-zjdxz\" (UID: \"e84beebf-64d8-47d0-8594-f7a028c9bf5d\") " pod="openshift-marketplace/certified-operators-zjdxz" Feb 16 17:29:06 crc kubenswrapper[4794]: I0216 17:29:06.225044 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e84beebf-64d8-47d0-8594-f7a028c9bf5d-utilities\") pod \"certified-operators-zjdxz\" (UID: \"e84beebf-64d8-47d0-8594-f7a028c9bf5d\") " pod="openshift-marketplace/certified-operators-zjdxz" Feb 16 17:29:06 crc kubenswrapper[4794]: I0216 17:29:06.225593 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e84beebf-64d8-47d0-8594-f7a028c9bf5d-utilities\") pod \"certified-operators-zjdxz\" (UID: \"e84beebf-64d8-47d0-8594-f7a028c9bf5d\") " pod="openshift-marketplace/certified-operators-zjdxz" Feb 16 17:29:06 crc kubenswrapper[4794]: I0216 17:29:06.225813 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e84beebf-64d8-47d0-8594-f7a028c9bf5d-catalog-content\") pod \"certified-operators-zjdxz\" (UID: \"e84beebf-64d8-47d0-8594-f7a028c9bf5d\") " pod="openshift-marketplace/certified-operators-zjdxz" Feb 16 17:29:06 crc kubenswrapper[4794]: I0216 17:29:06.243900 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbkdl\" (UniqueName: \"kubernetes.io/projected/e84beebf-64d8-47d0-8594-f7a028c9bf5d-kube-api-access-jbkdl\") pod \"certified-operators-zjdxz\" (UID: \"e84beebf-64d8-47d0-8594-f7a028c9bf5d\") " pod="openshift-marketplace/certified-operators-zjdxz" Feb 16 17:29:06 crc kubenswrapper[4794]: I0216 17:29:06.272180 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zjdxz" Feb 16 17:29:06 crc kubenswrapper[4794]: I0216 17:29:06.933628 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zjdxz"] Feb 16 17:29:07 crc kubenswrapper[4794]: I0216 17:29:07.792345 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691" Feb 16 17:29:07 crc kubenswrapper[4794]: E0216 17:29:07.793542 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:29:07 crc kubenswrapper[4794]: I0216 17:29:07.906950 4794 generic.go:334] "Generic (PLEG): container finished" podID="e84beebf-64d8-47d0-8594-f7a028c9bf5d" containerID="e2d0f48ae026c59da33a03339b9a480707c549b60a3fb8e66c2e590894b8a48f" exitCode=0 Feb 16 17:29:07 crc kubenswrapper[4794]: I0216 17:29:07.906992 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjdxz" event={"ID":"e84beebf-64d8-47d0-8594-f7a028c9bf5d","Type":"ContainerDied","Data":"e2d0f48ae026c59da33a03339b9a480707c549b60a3fb8e66c2e590894b8a48f"} Feb 16 17:29:07 crc kubenswrapper[4794]: I0216 17:29:07.907065 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjdxz" event={"ID":"e84beebf-64d8-47d0-8594-f7a028c9bf5d","Type":"ContainerStarted","Data":"a089f739d90aa30d3ca72bfafd66ff8a124b80f541b68124f7a939733db7ec81"} Feb 16 17:29:08 crc kubenswrapper[4794]: I0216 17:29:08.470563 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z6xcs"] Feb 16 
17:29:08 crc kubenswrapper[4794]: I0216 17:29:08.473643 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z6xcs" Feb 16 17:29:08 crc kubenswrapper[4794]: I0216 17:29:08.493335 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9604cc47-484a-4e5f-bafb-2d4648095cc9-utilities\") pod \"redhat-operators-z6xcs\" (UID: \"9604cc47-484a-4e5f-bafb-2d4648095cc9\") " pod="openshift-marketplace/redhat-operators-z6xcs" Feb 16 17:29:08 crc kubenswrapper[4794]: I0216 17:29:08.493853 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9604cc47-484a-4e5f-bafb-2d4648095cc9-catalog-content\") pod \"redhat-operators-z6xcs\" (UID: \"9604cc47-484a-4e5f-bafb-2d4648095cc9\") " pod="openshift-marketplace/redhat-operators-z6xcs" Feb 16 17:29:08 crc kubenswrapper[4794]: I0216 17:29:08.494187 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-278k6\" (UniqueName: \"kubernetes.io/projected/9604cc47-484a-4e5f-bafb-2d4648095cc9-kube-api-access-278k6\") pod \"redhat-operators-z6xcs\" (UID: \"9604cc47-484a-4e5f-bafb-2d4648095cc9\") " pod="openshift-marketplace/redhat-operators-z6xcs" Feb 16 17:29:08 crc kubenswrapper[4794]: I0216 17:29:08.494322 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z6xcs"] Feb 16 17:29:08 crc kubenswrapper[4794]: I0216 17:29:08.595455 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9604cc47-484a-4e5f-bafb-2d4648095cc9-catalog-content\") pod \"redhat-operators-z6xcs\" (UID: \"9604cc47-484a-4e5f-bafb-2d4648095cc9\") " pod="openshift-marketplace/redhat-operators-z6xcs" Feb 16 17:29:08 crc 
kubenswrapper[4794]: I0216 17:29:08.595531 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-278k6\" (UniqueName: \"kubernetes.io/projected/9604cc47-484a-4e5f-bafb-2d4648095cc9-kube-api-access-278k6\") pod \"redhat-operators-z6xcs\" (UID: \"9604cc47-484a-4e5f-bafb-2d4648095cc9\") " pod="openshift-marketplace/redhat-operators-z6xcs" Feb 16 17:29:08 crc kubenswrapper[4794]: I0216 17:29:08.595591 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9604cc47-484a-4e5f-bafb-2d4648095cc9-utilities\") pod \"redhat-operators-z6xcs\" (UID: \"9604cc47-484a-4e5f-bafb-2d4648095cc9\") " pod="openshift-marketplace/redhat-operators-z6xcs" Feb 16 17:29:08 crc kubenswrapper[4794]: I0216 17:29:08.596116 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9604cc47-484a-4e5f-bafb-2d4648095cc9-utilities\") pod \"redhat-operators-z6xcs\" (UID: \"9604cc47-484a-4e5f-bafb-2d4648095cc9\") " pod="openshift-marketplace/redhat-operators-z6xcs" Feb 16 17:29:08 crc kubenswrapper[4794]: I0216 17:29:08.596353 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9604cc47-484a-4e5f-bafb-2d4648095cc9-catalog-content\") pod \"redhat-operators-z6xcs\" (UID: \"9604cc47-484a-4e5f-bafb-2d4648095cc9\") " pod="openshift-marketplace/redhat-operators-z6xcs" Feb 16 17:29:08 crc kubenswrapper[4794]: I0216 17:29:08.623333 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-278k6\" (UniqueName: \"kubernetes.io/projected/9604cc47-484a-4e5f-bafb-2d4648095cc9-kube-api-access-278k6\") pod \"redhat-operators-z6xcs\" (UID: \"9604cc47-484a-4e5f-bafb-2d4648095cc9\") " pod="openshift-marketplace/redhat-operators-z6xcs" Feb 16 17:29:08 crc kubenswrapper[4794]: I0216 17:29:08.797346 4794 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z6xcs" Feb 16 17:29:08 crc kubenswrapper[4794]: I0216 17:29:08.922428 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjdxz" event={"ID":"e84beebf-64d8-47d0-8594-f7a028c9bf5d","Type":"ContainerStarted","Data":"31cfb10655ef42e55bf7cf252841fbb3768855f44e4e355da025abd09d51e637"} Feb 16 17:29:09 crc kubenswrapper[4794]: I0216 17:29:09.355560 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z6xcs"] Feb 16 17:29:09 crc kubenswrapper[4794]: I0216 17:29:09.937415 4794 generic.go:334] "Generic (PLEG): container finished" podID="9604cc47-484a-4e5f-bafb-2d4648095cc9" containerID="e40b9f75ee70158fe2b16eb9b23568a93bacd50e4b6738b77bfcbe3bf31bb64a" exitCode=0 Feb 16 17:29:09 crc kubenswrapper[4794]: I0216 17:29:09.937500 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z6xcs" event={"ID":"9604cc47-484a-4e5f-bafb-2d4648095cc9","Type":"ContainerDied","Data":"e40b9f75ee70158fe2b16eb9b23568a93bacd50e4b6738b77bfcbe3bf31bb64a"} Feb 16 17:29:09 crc kubenswrapper[4794]: I0216 17:29:09.938563 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z6xcs" event={"ID":"9604cc47-484a-4e5f-bafb-2d4648095cc9","Type":"ContainerStarted","Data":"dfd8f0ab35d6bc443727fb376ace1f62fe177e13e82a332ee62b57c34e28aa7e"} Feb 16 17:29:10 crc kubenswrapper[4794]: E0216 17:29:10.792621 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:29:10 crc kubenswrapper[4794]: I0216 17:29:10.951681 4794 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-operators-z6xcs" event={"ID":"9604cc47-484a-4e5f-bafb-2d4648095cc9","Type":"ContainerStarted","Data":"3df267718bacf36442be3ddd5e0c674420cf843b11dcb22450a77776906a3752"} Feb 16 17:29:10 crc kubenswrapper[4794]: I0216 17:29:10.953682 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjdxz" event={"ID":"e84beebf-64d8-47d0-8594-f7a028c9bf5d","Type":"ContainerDied","Data":"31cfb10655ef42e55bf7cf252841fbb3768855f44e4e355da025abd09d51e637"} Feb 16 17:29:10 crc kubenswrapper[4794]: I0216 17:29:10.953829 4794 generic.go:334] "Generic (PLEG): container finished" podID="e84beebf-64d8-47d0-8594-f7a028c9bf5d" containerID="31cfb10655ef42e55bf7cf252841fbb3768855f44e4e355da025abd09d51e637" exitCode=0 Feb 16 17:29:11 crc kubenswrapper[4794]: I0216 17:29:11.966656 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjdxz" event={"ID":"e84beebf-64d8-47d0-8594-f7a028c9bf5d","Type":"ContainerStarted","Data":"eecc6d90f660beec0154550373bac525687a8621025e83d5c8eac6661e9ca01d"} Feb 16 17:29:11 crc kubenswrapper[4794]: I0216 17:29:11.989671 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zjdxz" podStartSLOduration=3.297612279 podStartE2EDuration="6.989655489s" podCreationTimestamp="2026-02-16 17:29:05 +0000 UTC" firstStartedPulling="2026-02-16 17:29:07.90981296 +0000 UTC m=+1773.857907607" lastFinishedPulling="2026-02-16 17:29:11.60185618 +0000 UTC m=+1777.549950817" observedRunningTime="2026-02-16 17:29:11.986571422 +0000 UTC m=+1777.934666069" watchObservedRunningTime="2026-02-16 17:29:11.989655489 +0000 UTC m=+1777.937750136" Feb 16 17:29:16 crc kubenswrapper[4794]: I0216 17:29:16.011861 4794 generic.go:334] "Generic (PLEG): container finished" podID="9604cc47-484a-4e5f-bafb-2d4648095cc9" containerID="3df267718bacf36442be3ddd5e0c674420cf843b11dcb22450a77776906a3752" 
exitCode=0 Feb 16 17:29:16 crc kubenswrapper[4794]: I0216 17:29:16.011947 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z6xcs" event={"ID":"9604cc47-484a-4e5f-bafb-2d4648095cc9","Type":"ContainerDied","Data":"3df267718bacf36442be3ddd5e0c674420cf843b11dcb22450a77776906a3752"} Feb 16 17:29:16 crc kubenswrapper[4794]: I0216 17:29:16.273527 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zjdxz" Feb 16 17:29:16 crc kubenswrapper[4794]: I0216 17:29:16.274223 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zjdxz" Feb 16 17:29:17 crc kubenswrapper[4794]: I0216 17:29:17.024485 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z6xcs" event={"ID":"9604cc47-484a-4e5f-bafb-2d4648095cc9","Type":"ContainerStarted","Data":"9722e83ae0c1d8351d8a56ff548087e4b4f3af957b828d78fc49b0980a8aed1c"} Feb 16 17:29:17 crc kubenswrapper[4794]: I0216 17:29:17.054205 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z6xcs" podStartSLOduration=2.5915946610000002 podStartE2EDuration="9.054176512s" podCreationTimestamp="2026-02-16 17:29:08 +0000 UTC" firstStartedPulling="2026-02-16 17:29:09.942519235 +0000 UTC m=+1775.890613882" lastFinishedPulling="2026-02-16 17:29:16.405101086 +0000 UTC m=+1782.353195733" observedRunningTime="2026-02-16 17:29:17.047640636 +0000 UTC m=+1782.995735273" watchObservedRunningTime="2026-02-16 17:29:17.054176512 +0000 UTC m=+1783.002271179" Feb 16 17:29:17 crc kubenswrapper[4794]: I0216 17:29:17.328947 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zjdxz" podUID="e84beebf-64d8-47d0-8594-f7a028c9bf5d" containerName="registry-server" probeResult="failure" output=< Feb 16 17:29:17 crc kubenswrapper[4794]: timeout: 
failed to connect service ":50051" within 1s Feb 16 17:29:17 crc kubenswrapper[4794]: > Feb 16 17:29:17 crc kubenswrapper[4794]: E0216 17:29:17.793886 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:29:18 crc kubenswrapper[4794]: I0216 17:29:18.808862 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z6xcs" Feb 16 17:29:18 crc kubenswrapper[4794]: I0216 17:29:18.808898 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z6xcs" Feb 16 17:29:19 crc kubenswrapper[4794]: I0216 17:29:19.864587 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z6xcs" podUID="9604cc47-484a-4e5f-bafb-2d4648095cc9" containerName="registry-server" probeResult="failure" output=< Feb 16 17:29:19 crc kubenswrapper[4794]: timeout: failed to connect service ":50051" within 1s Feb 16 17:29:19 crc kubenswrapper[4794]: > Feb 16 17:29:22 crc kubenswrapper[4794]: I0216 17:29:22.791762 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691" Feb 16 17:29:23 crc kubenswrapper[4794]: I0216 17:29:23.090738 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"43fc9dd7f2fae3a5a4c080fa56f687e2435f83f7280f2c8e8a10fb66c8654d44"} Feb 16 17:29:25 crc kubenswrapper[4794]: E0216 17:29:25.794019 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:29:27 crc kubenswrapper[4794]: I0216 17:29:27.579508 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zjdxz" podUID="e84beebf-64d8-47d0-8594-f7a028c9bf5d" containerName="registry-server" probeResult="failure" output=< Feb 16 17:29:27 crc kubenswrapper[4794]: timeout: failed to connect service ":50051" within 1s Feb 16 17:29:27 crc kubenswrapper[4794]: > Feb 16 17:29:29 crc kubenswrapper[4794]: I0216 17:29:29.855446 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z6xcs" podUID="9604cc47-484a-4e5f-bafb-2d4648095cc9" containerName="registry-server" probeResult="failure" output=< Feb 16 17:29:29 crc kubenswrapper[4794]: timeout: failed to connect service ":50051" within 1s Feb 16 17:29:29 crc kubenswrapper[4794]: > Feb 16 17:29:31 crc kubenswrapper[4794]: E0216 17:29:31.793644 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:29:36 crc kubenswrapper[4794]: I0216 17:29:36.318447 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zjdxz" Feb 16 17:29:36 crc kubenswrapper[4794]: I0216 17:29:36.376643 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zjdxz" Feb 16 17:29:36 crc kubenswrapper[4794]: E0216 17:29:36.795015 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:29:37 crc kubenswrapper[4794]: I0216 17:29:37.102087 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zjdxz"] Feb 16 17:29:37 crc kubenswrapper[4794]: I0216 17:29:37.752776 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zjdxz" podUID="e84beebf-64d8-47d0-8594-f7a028c9bf5d" containerName="registry-server" containerID="cri-o://eecc6d90f660beec0154550373bac525687a8621025e83d5c8eac6661e9ca01d" gracePeriod=2 Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.410043 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zjdxz" Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.585408 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e84beebf-64d8-47d0-8594-f7a028c9bf5d-utilities\") pod \"e84beebf-64d8-47d0-8594-f7a028c9bf5d\" (UID: \"e84beebf-64d8-47d0-8594-f7a028c9bf5d\") " Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.585492 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e84beebf-64d8-47d0-8594-f7a028c9bf5d-catalog-content\") pod \"e84beebf-64d8-47d0-8594-f7a028c9bf5d\" (UID: \"e84beebf-64d8-47d0-8594-f7a028c9bf5d\") " Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.585596 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbkdl\" (UniqueName: \"kubernetes.io/projected/e84beebf-64d8-47d0-8594-f7a028c9bf5d-kube-api-access-jbkdl\") pod \"e84beebf-64d8-47d0-8594-f7a028c9bf5d\" (UID: 
\"e84beebf-64d8-47d0-8594-f7a028c9bf5d\") " Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.587012 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e84beebf-64d8-47d0-8594-f7a028c9bf5d-utilities" (OuterVolumeSpecName: "utilities") pod "e84beebf-64d8-47d0-8594-f7a028c9bf5d" (UID: "e84beebf-64d8-47d0-8594-f7a028c9bf5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.592710 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e84beebf-64d8-47d0-8594-f7a028c9bf5d-kube-api-access-jbkdl" (OuterVolumeSpecName: "kube-api-access-jbkdl") pod "e84beebf-64d8-47d0-8594-f7a028c9bf5d" (UID: "e84beebf-64d8-47d0-8594-f7a028c9bf5d"). InnerVolumeSpecName "kube-api-access-jbkdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.646385 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e84beebf-64d8-47d0-8594-f7a028c9bf5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e84beebf-64d8-47d0-8594-f7a028c9bf5d" (UID: "e84beebf-64d8-47d0-8594-f7a028c9bf5d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.687982 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e84beebf-64d8-47d0-8594-f7a028c9bf5d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.688021 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e84beebf-64d8-47d0-8594-f7a028c9bf5d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.688037 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbkdl\" (UniqueName: \"kubernetes.io/projected/e84beebf-64d8-47d0-8594-f7a028c9bf5d-kube-api-access-jbkdl\") on node \"crc\" DevicePath \"\"" Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.768794 4794 generic.go:334] "Generic (PLEG): container finished" podID="e84beebf-64d8-47d0-8594-f7a028c9bf5d" containerID="eecc6d90f660beec0154550373bac525687a8621025e83d5c8eac6661e9ca01d" exitCode=0 Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.768865 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjdxz" event={"ID":"e84beebf-64d8-47d0-8594-f7a028c9bf5d","Type":"ContainerDied","Data":"eecc6d90f660beec0154550373bac525687a8621025e83d5c8eac6661e9ca01d"} Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.768889 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zjdxz" Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.768914 4794 scope.go:117] "RemoveContainer" containerID="eecc6d90f660beec0154550373bac525687a8621025e83d5c8eac6661e9ca01d" Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.768902 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjdxz" event={"ID":"e84beebf-64d8-47d0-8594-f7a028c9bf5d","Type":"ContainerDied","Data":"a089f739d90aa30d3ca72bfafd66ff8a124b80f541b68124f7a939733db7ec81"} Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.798821 4794 scope.go:117] "RemoveContainer" containerID="31cfb10655ef42e55bf7cf252841fbb3768855f44e4e355da025abd09d51e637" Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.812198 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zjdxz"] Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.833332 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zjdxz"] Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.843624 4794 scope.go:117] "RemoveContainer" containerID="e2d0f48ae026c59da33a03339b9a480707c549b60a3fb8e66c2e590894b8a48f" Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.900095 4794 scope.go:117] "RemoveContainer" containerID="eecc6d90f660beec0154550373bac525687a8621025e83d5c8eac6661e9ca01d" Feb 16 17:29:38 crc kubenswrapper[4794]: E0216 17:29:38.900492 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eecc6d90f660beec0154550373bac525687a8621025e83d5c8eac6661e9ca01d\": container with ID starting with eecc6d90f660beec0154550373bac525687a8621025e83d5c8eac6661e9ca01d not found: ID does not exist" containerID="eecc6d90f660beec0154550373bac525687a8621025e83d5c8eac6661e9ca01d" Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.900527 4794 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eecc6d90f660beec0154550373bac525687a8621025e83d5c8eac6661e9ca01d"} err="failed to get container status \"eecc6d90f660beec0154550373bac525687a8621025e83d5c8eac6661e9ca01d\": rpc error: code = NotFound desc = could not find container \"eecc6d90f660beec0154550373bac525687a8621025e83d5c8eac6661e9ca01d\": container with ID starting with eecc6d90f660beec0154550373bac525687a8621025e83d5c8eac6661e9ca01d not found: ID does not exist" Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.900555 4794 scope.go:117] "RemoveContainer" containerID="31cfb10655ef42e55bf7cf252841fbb3768855f44e4e355da025abd09d51e637" Feb 16 17:29:38 crc kubenswrapper[4794]: E0216 17:29:38.901218 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31cfb10655ef42e55bf7cf252841fbb3768855f44e4e355da025abd09d51e637\": container with ID starting with 31cfb10655ef42e55bf7cf252841fbb3768855f44e4e355da025abd09d51e637 not found: ID does not exist" containerID="31cfb10655ef42e55bf7cf252841fbb3768855f44e4e355da025abd09d51e637" Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.901403 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31cfb10655ef42e55bf7cf252841fbb3768855f44e4e355da025abd09d51e637"} err="failed to get container status \"31cfb10655ef42e55bf7cf252841fbb3768855f44e4e355da025abd09d51e637\": rpc error: code = NotFound desc = could not find container \"31cfb10655ef42e55bf7cf252841fbb3768855f44e4e355da025abd09d51e637\": container with ID starting with 31cfb10655ef42e55bf7cf252841fbb3768855f44e4e355da025abd09d51e637 not found: ID does not exist" Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.901530 4794 scope.go:117] "RemoveContainer" containerID="e2d0f48ae026c59da33a03339b9a480707c549b60a3fb8e66c2e590894b8a48f" Feb 16 17:29:38 crc kubenswrapper[4794]: E0216 
17:29:38.902253 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2d0f48ae026c59da33a03339b9a480707c549b60a3fb8e66c2e590894b8a48f\": container with ID starting with e2d0f48ae026c59da33a03339b9a480707c549b60a3fb8e66c2e590894b8a48f not found: ID does not exist" containerID="e2d0f48ae026c59da33a03339b9a480707c549b60a3fb8e66c2e590894b8a48f" Feb 16 17:29:38 crc kubenswrapper[4794]: I0216 17:29:38.902287 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2d0f48ae026c59da33a03339b9a480707c549b60a3fb8e66c2e590894b8a48f"} err="failed to get container status \"e2d0f48ae026c59da33a03339b9a480707c549b60a3fb8e66c2e590894b8a48f\": rpc error: code = NotFound desc = could not find container \"e2d0f48ae026c59da33a03339b9a480707c549b60a3fb8e66c2e590894b8a48f\": container with ID starting with e2d0f48ae026c59da33a03339b9a480707c549b60a3fb8e66c2e590894b8a48f not found: ID does not exist" Feb 16 17:29:39 crc kubenswrapper[4794]: I0216 17:29:39.860474 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z6xcs" podUID="9604cc47-484a-4e5f-bafb-2d4648095cc9" containerName="registry-server" probeResult="failure" output=< Feb 16 17:29:39 crc kubenswrapper[4794]: timeout: failed to connect service ":50051" within 1s Feb 16 17:29:39 crc kubenswrapper[4794]: > Feb 16 17:29:40 crc kubenswrapper[4794]: I0216 17:29:40.804564 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e84beebf-64d8-47d0-8594-f7a028c9bf5d" path="/var/lib/kubelet/pods/e84beebf-64d8-47d0-8594-f7a028c9bf5d/volumes" Feb 16 17:29:43 crc kubenswrapper[4794]: E0216 17:29:43.794632 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:29:48 crc kubenswrapper[4794]: I0216 17:29:48.853746 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z6xcs" Feb 16 17:29:48 crc kubenswrapper[4794]: I0216 17:29:48.918837 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z6xcs" Feb 16 17:29:49 crc kubenswrapper[4794]: I0216 17:29:49.106942 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z6xcs"] Feb 16 17:29:49 crc kubenswrapper[4794]: I0216 17:29:49.912754 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z6xcs" podUID="9604cc47-484a-4e5f-bafb-2d4648095cc9" containerName="registry-server" containerID="cri-o://9722e83ae0c1d8351d8a56ff548087e4b4f3af957b828d78fc49b0980a8aed1c" gracePeriod=2 Feb 16 17:29:50 crc kubenswrapper[4794]: I0216 17:29:50.388751 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z6xcs" Feb 16 17:29:50 crc kubenswrapper[4794]: I0216 17:29:50.498339 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9604cc47-484a-4e5f-bafb-2d4648095cc9-catalog-content\") pod \"9604cc47-484a-4e5f-bafb-2d4648095cc9\" (UID: \"9604cc47-484a-4e5f-bafb-2d4648095cc9\") " Feb 16 17:29:50 crc kubenswrapper[4794]: I0216 17:29:50.498386 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-278k6\" (UniqueName: \"kubernetes.io/projected/9604cc47-484a-4e5f-bafb-2d4648095cc9-kube-api-access-278k6\") pod \"9604cc47-484a-4e5f-bafb-2d4648095cc9\" (UID: \"9604cc47-484a-4e5f-bafb-2d4648095cc9\") " Feb 16 17:29:50 crc kubenswrapper[4794]: I0216 17:29:50.498420 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9604cc47-484a-4e5f-bafb-2d4648095cc9-utilities\") pod \"9604cc47-484a-4e5f-bafb-2d4648095cc9\" (UID: \"9604cc47-484a-4e5f-bafb-2d4648095cc9\") " Feb 16 17:29:50 crc kubenswrapper[4794]: I0216 17:29:50.499546 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9604cc47-484a-4e5f-bafb-2d4648095cc9-utilities" (OuterVolumeSpecName: "utilities") pod "9604cc47-484a-4e5f-bafb-2d4648095cc9" (UID: "9604cc47-484a-4e5f-bafb-2d4648095cc9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:29:50 crc kubenswrapper[4794]: I0216 17:29:50.508895 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9604cc47-484a-4e5f-bafb-2d4648095cc9-kube-api-access-278k6" (OuterVolumeSpecName: "kube-api-access-278k6") pod "9604cc47-484a-4e5f-bafb-2d4648095cc9" (UID: "9604cc47-484a-4e5f-bafb-2d4648095cc9"). InnerVolumeSpecName "kube-api-access-278k6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:29:50 crc kubenswrapper[4794]: I0216 17:29:50.601835 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-278k6\" (UniqueName: \"kubernetes.io/projected/9604cc47-484a-4e5f-bafb-2d4648095cc9-kube-api-access-278k6\") on node \"crc\" DevicePath \"\"" Feb 16 17:29:50 crc kubenswrapper[4794]: I0216 17:29:50.601877 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9604cc47-484a-4e5f-bafb-2d4648095cc9-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:29:50 crc kubenswrapper[4794]: I0216 17:29:50.617712 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9604cc47-484a-4e5f-bafb-2d4648095cc9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9604cc47-484a-4e5f-bafb-2d4648095cc9" (UID: "9604cc47-484a-4e5f-bafb-2d4648095cc9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:29:50 crc kubenswrapper[4794]: I0216 17:29:50.704573 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9604cc47-484a-4e5f-bafb-2d4648095cc9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:29:50 crc kubenswrapper[4794]: E0216 17:29:50.793677 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:29:50 crc kubenswrapper[4794]: I0216 17:29:50.935590 4794 generic.go:334] "Generic (PLEG): container finished" podID="9604cc47-484a-4e5f-bafb-2d4648095cc9" containerID="9722e83ae0c1d8351d8a56ff548087e4b4f3af957b828d78fc49b0980a8aed1c" exitCode=0 Feb 16 17:29:50 crc 
kubenswrapper[4794]: I0216 17:29:50.935766 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z6xcs" Feb 16 17:29:50 crc kubenswrapper[4794]: I0216 17:29:50.935805 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z6xcs" event={"ID":"9604cc47-484a-4e5f-bafb-2d4648095cc9","Type":"ContainerDied","Data":"9722e83ae0c1d8351d8a56ff548087e4b4f3af957b828d78fc49b0980a8aed1c"} Feb 16 17:29:50 crc kubenswrapper[4794]: I0216 17:29:50.936901 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z6xcs" event={"ID":"9604cc47-484a-4e5f-bafb-2d4648095cc9","Type":"ContainerDied","Data":"dfd8f0ab35d6bc443727fb376ace1f62fe177e13e82a332ee62b57c34e28aa7e"} Feb 16 17:29:50 crc kubenswrapper[4794]: I0216 17:29:50.936924 4794 scope.go:117] "RemoveContainer" containerID="9722e83ae0c1d8351d8a56ff548087e4b4f3af957b828d78fc49b0980a8aed1c" Feb 16 17:29:50 crc kubenswrapper[4794]: I0216 17:29:50.967990 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z6xcs"] Feb 16 17:29:50 crc kubenswrapper[4794]: I0216 17:29:50.975958 4794 scope.go:117] "RemoveContainer" containerID="3df267718bacf36442be3ddd5e0c674420cf843b11dcb22450a77776906a3752" Feb 16 17:29:50 crc kubenswrapper[4794]: I0216 17:29:50.988704 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z6xcs"] Feb 16 17:29:51 crc kubenswrapper[4794]: I0216 17:29:51.001559 4794 scope.go:117] "RemoveContainer" containerID="e40b9f75ee70158fe2b16eb9b23568a93bacd50e4b6738b77bfcbe3bf31bb64a" Feb 16 17:29:51 crc kubenswrapper[4794]: I0216 17:29:51.083601 4794 scope.go:117] "RemoveContainer" containerID="9722e83ae0c1d8351d8a56ff548087e4b4f3af957b828d78fc49b0980a8aed1c" Feb 16 17:29:51 crc kubenswrapper[4794]: E0216 17:29:51.083928 4794 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"9722e83ae0c1d8351d8a56ff548087e4b4f3af957b828d78fc49b0980a8aed1c\": container with ID starting with 9722e83ae0c1d8351d8a56ff548087e4b4f3af957b828d78fc49b0980a8aed1c not found: ID does not exist" containerID="9722e83ae0c1d8351d8a56ff548087e4b4f3af957b828d78fc49b0980a8aed1c" Feb 16 17:29:51 crc kubenswrapper[4794]: I0216 17:29:51.083970 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9722e83ae0c1d8351d8a56ff548087e4b4f3af957b828d78fc49b0980a8aed1c"} err="failed to get container status \"9722e83ae0c1d8351d8a56ff548087e4b4f3af957b828d78fc49b0980a8aed1c\": rpc error: code = NotFound desc = could not find container \"9722e83ae0c1d8351d8a56ff548087e4b4f3af957b828d78fc49b0980a8aed1c\": container with ID starting with 9722e83ae0c1d8351d8a56ff548087e4b4f3af957b828d78fc49b0980a8aed1c not found: ID does not exist" Feb 16 17:29:51 crc kubenswrapper[4794]: I0216 17:29:51.084000 4794 scope.go:117] "RemoveContainer" containerID="3df267718bacf36442be3ddd5e0c674420cf843b11dcb22450a77776906a3752" Feb 16 17:29:51 crc kubenswrapper[4794]: E0216 17:29:51.084531 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3df267718bacf36442be3ddd5e0c674420cf843b11dcb22450a77776906a3752\": container with ID starting with 3df267718bacf36442be3ddd5e0c674420cf843b11dcb22450a77776906a3752 not found: ID does not exist" containerID="3df267718bacf36442be3ddd5e0c674420cf843b11dcb22450a77776906a3752" Feb 16 17:29:51 crc kubenswrapper[4794]: I0216 17:29:51.084561 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3df267718bacf36442be3ddd5e0c674420cf843b11dcb22450a77776906a3752"} err="failed to get container status \"3df267718bacf36442be3ddd5e0c674420cf843b11dcb22450a77776906a3752\": rpc error: code = NotFound desc = could not find container 
\"3df267718bacf36442be3ddd5e0c674420cf843b11dcb22450a77776906a3752\": container with ID starting with 3df267718bacf36442be3ddd5e0c674420cf843b11dcb22450a77776906a3752 not found: ID does not exist" Feb 16 17:29:51 crc kubenswrapper[4794]: I0216 17:29:51.084584 4794 scope.go:117] "RemoveContainer" containerID="e40b9f75ee70158fe2b16eb9b23568a93bacd50e4b6738b77bfcbe3bf31bb64a" Feb 16 17:29:51 crc kubenswrapper[4794]: E0216 17:29:51.084767 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e40b9f75ee70158fe2b16eb9b23568a93bacd50e4b6738b77bfcbe3bf31bb64a\": container with ID starting with e40b9f75ee70158fe2b16eb9b23568a93bacd50e4b6738b77bfcbe3bf31bb64a not found: ID does not exist" containerID="e40b9f75ee70158fe2b16eb9b23568a93bacd50e4b6738b77bfcbe3bf31bb64a" Feb 16 17:29:51 crc kubenswrapper[4794]: I0216 17:29:51.084788 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e40b9f75ee70158fe2b16eb9b23568a93bacd50e4b6738b77bfcbe3bf31bb64a"} err="failed to get container status \"e40b9f75ee70158fe2b16eb9b23568a93bacd50e4b6738b77bfcbe3bf31bb64a\": rpc error: code = NotFound desc = could not find container \"e40b9f75ee70158fe2b16eb9b23568a93bacd50e4b6738b77bfcbe3bf31bb64a\": container with ID starting with e40b9f75ee70158fe2b16eb9b23568a93bacd50e4b6738b77bfcbe3bf31bb64a not found: ID does not exist" Feb 16 17:29:52 crc kubenswrapper[4794]: I0216 17:29:52.803397 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9604cc47-484a-4e5f-bafb-2d4648095cc9" path="/var/lib/kubelet/pods/9604cc47-484a-4e5f-bafb-2d4648095cc9/volumes" Feb 16 17:29:58 crc kubenswrapper[4794]: E0216 17:29:58.794143 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.177077 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521050-jzcdh"] Feb 16 17:30:00 crc kubenswrapper[4794]: E0216 17:30:00.178062 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9604cc47-484a-4e5f-bafb-2d4648095cc9" containerName="extract-utilities" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.178081 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="9604cc47-484a-4e5f-bafb-2d4648095cc9" containerName="extract-utilities" Feb 16 17:30:00 crc kubenswrapper[4794]: E0216 17:30:00.178106 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9604cc47-484a-4e5f-bafb-2d4648095cc9" containerName="extract-content" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.178115 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="9604cc47-484a-4e5f-bafb-2d4648095cc9" containerName="extract-content" Feb 16 17:30:00 crc kubenswrapper[4794]: E0216 17:30:00.178127 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e84beebf-64d8-47d0-8594-f7a028c9bf5d" containerName="extract-utilities" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.178134 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="e84beebf-64d8-47d0-8594-f7a028c9bf5d" containerName="extract-utilities" Feb 16 17:30:00 crc kubenswrapper[4794]: E0216 17:30:00.178146 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e84beebf-64d8-47d0-8594-f7a028c9bf5d" containerName="extract-content" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.178151 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="e84beebf-64d8-47d0-8594-f7a028c9bf5d" containerName="extract-content" Feb 16 17:30:00 crc 
kubenswrapper[4794]: E0216 17:30:00.178163 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e84beebf-64d8-47d0-8594-f7a028c9bf5d" containerName="registry-server" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.178168 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="e84beebf-64d8-47d0-8594-f7a028c9bf5d" containerName="registry-server" Feb 16 17:30:00 crc kubenswrapper[4794]: E0216 17:30:00.178206 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9604cc47-484a-4e5f-bafb-2d4648095cc9" containerName="registry-server" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.178212 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="9604cc47-484a-4e5f-bafb-2d4648095cc9" containerName="registry-server" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.178457 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="9604cc47-484a-4e5f-bafb-2d4648095cc9" containerName="registry-server" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.178486 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="e84beebf-64d8-47d0-8594-f7a028c9bf5d" containerName="registry-server" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.179353 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-jzcdh" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.182166 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.182949 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.198120 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521050-jzcdh"] Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.348091 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44652222-f734-44ff-8769-44adae44fc93-config-volume\") pod \"collect-profiles-29521050-jzcdh\" (UID: \"44652222-f734-44ff-8769-44adae44fc93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-jzcdh" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.348464 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np8mq\" (UniqueName: \"kubernetes.io/projected/44652222-f734-44ff-8769-44adae44fc93-kube-api-access-np8mq\") pod \"collect-profiles-29521050-jzcdh\" (UID: \"44652222-f734-44ff-8769-44adae44fc93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-jzcdh" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.348556 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44652222-f734-44ff-8769-44adae44fc93-secret-volume\") pod \"collect-profiles-29521050-jzcdh\" (UID: \"44652222-f734-44ff-8769-44adae44fc93\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-jzcdh" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.450358 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44652222-f734-44ff-8769-44adae44fc93-secret-volume\") pod \"collect-profiles-29521050-jzcdh\" (UID: \"44652222-f734-44ff-8769-44adae44fc93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-jzcdh" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.450551 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44652222-f734-44ff-8769-44adae44fc93-config-volume\") pod \"collect-profiles-29521050-jzcdh\" (UID: \"44652222-f734-44ff-8769-44adae44fc93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-jzcdh" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.450681 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np8mq\" (UniqueName: \"kubernetes.io/projected/44652222-f734-44ff-8769-44adae44fc93-kube-api-access-np8mq\") pod \"collect-profiles-29521050-jzcdh\" (UID: \"44652222-f734-44ff-8769-44adae44fc93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-jzcdh" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.451817 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44652222-f734-44ff-8769-44adae44fc93-config-volume\") pod \"collect-profiles-29521050-jzcdh\" (UID: \"44652222-f734-44ff-8769-44adae44fc93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-jzcdh" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.458577 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/44652222-f734-44ff-8769-44adae44fc93-secret-volume\") pod \"collect-profiles-29521050-jzcdh\" (UID: \"44652222-f734-44ff-8769-44adae44fc93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-jzcdh" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.469140 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np8mq\" (UniqueName: \"kubernetes.io/projected/44652222-f734-44ff-8769-44adae44fc93-kube-api-access-np8mq\") pod \"collect-profiles-29521050-jzcdh\" (UID: \"44652222-f734-44ff-8769-44adae44fc93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-jzcdh" Feb 16 17:30:00 crc kubenswrapper[4794]: I0216 17:30:00.503226 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-jzcdh" Feb 16 17:30:01 crc kubenswrapper[4794]: I0216 17:30:01.021760 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521050-jzcdh"] Feb 16 17:30:01 crc kubenswrapper[4794]: I0216 17:30:01.088146 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-jzcdh" event={"ID":"44652222-f734-44ff-8769-44adae44fc93","Type":"ContainerStarted","Data":"352e3ec54e6a38b72fb9bb43c3b4d0a70ae176e1d7d479ea303da2717add4a44"} Feb 16 17:30:02 crc kubenswrapper[4794]: I0216 17:30:02.110777 4794 generic.go:334] "Generic (PLEG): container finished" podID="44652222-f734-44ff-8769-44adae44fc93" containerID="bc9365e8426a88c0b09ed8c3836f8a80d98196debeec5b07be146511e0454e50" exitCode=0 Feb 16 17:30:02 crc kubenswrapper[4794]: I0216 17:30:02.113033 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-jzcdh" 
event={"ID":"44652222-f734-44ff-8769-44adae44fc93","Type":"ContainerDied","Data":"bc9365e8426a88c0b09ed8c3836f8a80d98196debeec5b07be146511e0454e50"} Feb 16 17:30:03 crc kubenswrapper[4794]: I0216 17:30:03.604445 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-jzcdh" Feb 16 17:30:03 crc kubenswrapper[4794]: I0216 17:30:03.774936 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44652222-f734-44ff-8769-44adae44fc93-config-volume\") pod \"44652222-f734-44ff-8769-44adae44fc93\" (UID: \"44652222-f734-44ff-8769-44adae44fc93\") " Feb 16 17:30:03 crc kubenswrapper[4794]: I0216 17:30:03.775099 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44652222-f734-44ff-8769-44adae44fc93-secret-volume\") pod \"44652222-f734-44ff-8769-44adae44fc93\" (UID: \"44652222-f734-44ff-8769-44adae44fc93\") " Feb 16 17:30:03 crc kubenswrapper[4794]: I0216 17:30:03.775419 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np8mq\" (UniqueName: \"kubernetes.io/projected/44652222-f734-44ff-8769-44adae44fc93-kube-api-access-np8mq\") pod \"44652222-f734-44ff-8769-44adae44fc93\" (UID: \"44652222-f734-44ff-8769-44adae44fc93\") " Feb 16 17:30:03 crc kubenswrapper[4794]: I0216 17:30:03.776006 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44652222-f734-44ff-8769-44adae44fc93-config-volume" (OuterVolumeSpecName: "config-volume") pod "44652222-f734-44ff-8769-44adae44fc93" (UID: "44652222-f734-44ff-8769-44adae44fc93"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:30:03 crc kubenswrapper[4794]: I0216 17:30:03.782541 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44652222-f734-44ff-8769-44adae44fc93-kube-api-access-np8mq" (OuterVolumeSpecName: "kube-api-access-np8mq") pod "44652222-f734-44ff-8769-44adae44fc93" (UID: "44652222-f734-44ff-8769-44adae44fc93"). InnerVolumeSpecName "kube-api-access-np8mq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:30:03 crc kubenswrapper[4794]: E0216 17:30:03.793046 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:30:03 crc kubenswrapper[4794]: I0216 17:30:03.797457 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44652222-f734-44ff-8769-44adae44fc93-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "44652222-f734-44ff-8769-44adae44fc93" (UID: "44652222-f734-44ff-8769-44adae44fc93"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:30:03 crc kubenswrapper[4794]: I0216 17:30:03.878579 4794 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44652222-f734-44ff-8769-44adae44fc93-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 16 17:30:03 crc kubenswrapper[4794]: I0216 17:30:03.878611 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np8mq\" (UniqueName: \"kubernetes.io/projected/44652222-f734-44ff-8769-44adae44fc93-kube-api-access-np8mq\") on node \"crc\" DevicePath \"\""
Feb 16 17:30:03 crc kubenswrapper[4794]: I0216 17:30:03.878621 4794 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44652222-f734-44ff-8769-44adae44fc93-config-volume\") on node \"crc\" DevicePath \"\""
Feb 16 17:30:04 crc kubenswrapper[4794]: I0216 17:30:04.139541 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-jzcdh" event={"ID":"44652222-f734-44ff-8769-44adae44fc93","Type":"ContainerDied","Data":"352e3ec54e6a38b72fb9bb43c3b4d0a70ae176e1d7d479ea303da2717add4a44"}
Feb 16 17:30:04 crc kubenswrapper[4794]: I0216 17:30:04.139932 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="352e3ec54e6a38b72fb9bb43c3b4d0a70ae176e1d7d479ea303da2717add4a44"
Feb 16 17:30:04 crc kubenswrapper[4794]: I0216 17:30:04.139603 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-jzcdh"
Feb 16 17:30:11 crc kubenswrapper[4794]: E0216 17:30:11.794760 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:30:18 crc kubenswrapper[4794]: E0216 17:30:18.794636 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:30:22 crc kubenswrapper[4794]: E0216 17:30:22.809705 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:30:29 crc kubenswrapper[4794]: E0216 17:30:29.793848 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:30:33 crc kubenswrapper[4794]: E0216 17:30:33.795345 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:30:38 crc kubenswrapper[4794]: I0216 17:30:38.796373 4794 scope.go:117] "RemoveContainer" containerID="a00b53ad46b822a70c9339195ca2a4b34915849555540ce220adb1a6c8f851a8"
Feb 16 17:30:38 crc kubenswrapper[4794]: I0216 17:30:38.826058 4794 scope.go:117] "RemoveContainer" containerID="128e84ee994db10b71ef37c8025aa78608235ade22a8dc2863eec2584b1dd6b5"
Feb 16 17:30:40 crc kubenswrapper[4794]: E0216 17:30:40.793036 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:30:44 crc kubenswrapper[4794]: I0216 17:30:44.044075 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6wxqb"]
Feb 16 17:30:44 crc kubenswrapper[4794]: I0216 17:30:44.056896 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6wxqb"]
Feb 16 17:30:44 crc kubenswrapper[4794]: I0216 17:30:44.805728 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7351df94-ade5-4e5e-b281-b195301dc37d" path="/var/lib/kubelet/pods/7351df94-ade5-4e5e-b281-b195301dc37d/volumes"
Feb 16 17:30:45 crc kubenswrapper[4794]: I0216 17:30:45.052025 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-s4fk6"]
Feb 16 17:30:45 crc kubenswrapper[4794]: I0216 17:30:45.060282 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c135-account-create-update-79qz9"]
Feb 16 17:30:45 crc kubenswrapper[4794]: I0216 17:30:45.073065 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0fed-account-create-update-tb2gr"]
Feb 16 17:30:45 crc kubenswrapper[4794]: I0216 17:30:45.086170 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c135-account-create-update-79qz9"]
Feb 16 17:30:45 crc kubenswrapper[4794]: I0216 17:30:45.097033 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0fed-account-create-update-tb2gr"]
Feb 16 17:30:45 crc kubenswrapper[4794]: I0216 17:30:45.110002 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-s4fk6"]
Feb 16 17:30:46 crc kubenswrapper[4794]: I0216 17:30:46.047052 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-7b9c-account-create-update-xqtkk"]
Feb 16 17:30:46 crc kubenswrapper[4794]: I0216 17:30:46.061586 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-7b9c-account-create-update-xqtkk"]
Feb 16 17:30:46 crc kubenswrapper[4794]: I0216 17:30:46.821888 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05c86def-4e37-40ef-847d-ccb9dd6c99a9" path="/var/lib/kubelet/pods/05c86def-4e37-40ef-847d-ccb9dd6c99a9/volumes"
Feb 16 17:30:46 crc kubenswrapper[4794]: I0216 17:30:46.825218 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d346472-4e86-4519-8307-ee7cf5f74280" path="/var/lib/kubelet/pods/3d346472-4e86-4519-8307-ee7cf5f74280/volumes"
Feb 16 17:30:46 crc kubenswrapper[4794]: I0216 17:30:46.827599 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7fd9bb0-100b-4941-80d2-1a9ec63423be" path="/var/lib/kubelet/pods/a7fd9bb0-100b-4941-80d2-1a9ec63423be/volumes"
Feb 16 17:30:46 crc kubenswrapper[4794]: I0216 17:30:46.828692 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8cd1b17-5173-42b6-a51d-e2a057d404f4" path="/var/lib/kubelet/pods/b8cd1b17-5173-42b6-a51d-e2a057d404f4/volumes"
Feb 16 17:30:47 crc kubenswrapper[4794]: I0216 17:30:47.641268 4794 generic.go:334] "Generic (PLEG): container finished" podID="00aac5cd-2d06-4021-9d8d-5724b2ad87bc" containerID="9d679cb0fd48155e67d7a08aaa2b07178827aaddad991c51011b1448792c347c" exitCode=0
Feb 16 17:30:47 crc kubenswrapper[4794]: I0216 17:30:47.641346 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44" event={"ID":"00aac5cd-2d06-4021-9d8d-5724b2ad87bc","Type":"ContainerDied","Data":"9d679cb0fd48155e67d7a08aaa2b07178827aaddad991c51011b1448792c347c"}
Feb 16 17:30:47 crc kubenswrapper[4794]: E0216 17:30:47.793700 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:30:48 crc kubenswrapper[4794]: I0216 17:30:48.045235 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-42a4-account-create-update-r755d"]
Feb 16 17:30:48 crc kubenswrapper[4794]: I0216 17:30:48.057128 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-42a4-account-create-update-r755d"]
Feb 16 17:30:48 crc kubenswrapper[4794]: I0216 17:30:48.808616 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4ed7df7-08c2-4c06-bd2b-14ea362191d1" path="/var/lib/kubelet/pods/b4ed7df7-08c2-4c06-bd2b-14ea362191d1/volumes"
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.169856 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44"
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.234577 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-bootstrap-combined-ca-bundle\") pod \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\" (UID: \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\") "
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.235585 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-ssh-key-openstack-edpm-ipam\") pod \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\" (UID: \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\") "
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.235697 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxjhk\" (UniqueName: \"kubernetes.io/projected/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-kube-api-access-lxjhk\") pod \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\" (UID: \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\") "
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.235833 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-inventory\") pod \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\" (UID: \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\") "
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.240716 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "00aac5cd-2d06-4021-9d8d-5724b2ad87bc" (UID: "00aac5cd-2d06-4021-9d8d-5724b2ad87bc"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.242452 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-kube-api-access-lxjhk" (OuterVolumeSpecName: "kube-api-access-lxjhk") pod "00aac5cd-2d06-4021-9d8d-5724b2ad87bc" (UID: "00aac5cd-2d06-4021-9d8d-5724b2ad87bc"). InnerVolumeSpecName "kube-api-access-lxjhk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:30:49 crc kubenswrapper[4794]: E0216 17:30:49.265549 4794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-inventory podName:00aac5cd-2d06-4021-9d8d-5724b2ad87bc nodeName:}" failed. No retries permitted until 2026-02-16 17:30:49.765522484 +0000 UTC m=+1875.713617131 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-inventory") pod "00aac5cd-2d06-4021-9d8d-5724b2ad87bc" (UID: "00aac5cd-2d06-4021-9d8d-5724b2ad87bc") : error deleting /var/lib/kubelet/pods/00aac5cd-2d06-4021-9d8d-5724b2ad87bc/volume-subpaths: remove /var/lib/kubelet/pods/00aac5cd-2d06-4021-9d8d-5724b2ad87bc/volume-subpaths: no such file or directory
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.275755 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "00aac5cd-2d06-4021-9d8d-5724b2ad87bc" (UID: "00aac5cd-2d06-4021-9d8d-5724b2ad87bc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.339033 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxjhk\" (UniqueName: \"kubernetes.io/projected/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-kube-api-access-lxjhk\") on node \"crc\" DevicePath \"\""
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.339296 4794 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.339384 4794 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.664563 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44" event={"ID":"00aac5cd-2d06-4021-9d8d-5724b2ad87bc","Type":"ContainerDied","Data":"3dca23c78a745788d4ca81b43bc70ab0047dd0c3f88bfd3465d527ff347e7eba"}
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.664608 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dca23c78a745788d4ca81b43bc70ab0047dd0c3f88bfd3465d527ff347e7eba"
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.664641 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44"
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.813642 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z"]
Feb 16 17:30:49 crc kubenswrapper[4794]: E0216 17:30:49.818357 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00aac5cd-2d06-4021-9d8d-5724b2ad87bc" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.818382 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="00aac5cd-2d06-4021-9d8d-5724b2ad87bc" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 16 17:30:49 crc kubenswrapper[4794]: E0216 17:30:49.818395 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44652222-f734-44ff-8769-44adae44fc93" containerName="collect-profiles"
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.818403 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="44652222-f734-44ff-8769-44adae44fc93" containerName="collect-profiles"
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.818663 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="00aac5cd-2d06-4021-9d8d-5724b2ad87bc" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.818691 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="44652222-f734-44ff-8769-44adae44fc93" containerName="collect-profiles"
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.819529 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z"
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.839839 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z"]
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.867412 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-inventory\") pod \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\" (UID: \"00aac5cd-2d06-4021-9d8d-5724b2ad87bc\") "
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.871923 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-inventory" (OuterVolumeSpecName: "inventory") pod "00aac5cd-2d06-4021-9d8d-5724b2ad87bc" (UID: "00aac5cd-2d06-4021-9d8d-5724b2ad87bc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.970392 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzqf7\" (UniqueName: \"kubernetes.io/projected/25576ab9-760b-40e6-b7c7-866fbb7ed70c-kube-api-access-pzqf7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z\" (UID: \"25576ab9-760b-40e6-b7c7-866fbb7ed70c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z"
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.970445 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25576ab9-760b-40e6-b7c7-866fbb7ed70c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z\" (UID: \"25576ab9-760b-40e6-b7c7-866fbb7ed70c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z"
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.970790 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25576ab9-760b-40e6-b7c7-866fbb7ed70c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z\" (UID: \"25576ab9-760b-40e6-b7c7-866fbb7ed70c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z"
Feb 16 17:30:49 crc kubenswrapper[4794]: I0216 17:30:49.971165 4794 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00aac5cd-2d06-4021-9d8d-5724b2ad87bc-inventory\") on node \"crc\" DevicePath \"\""
Feb 16 17:30:50 crc kubenswrapper[4794]: I0216 17:30:50.073399 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzqf7\" (UniqueName: \"kubernetes.io/projected/25576ab9-760b-40e6-b7c7-866fbb7ed70c-kube-api-access-pzqf7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z\" (UID: \"25576ab9-760b-40e6-b7c7-866fbb7ed70c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z"
Feb 16 17:30:50 crc kubenswrapper[4794]: I0216 17:30:50.073472 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25576ab9-760b-40e6-b7c7-866fbb7ed70c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z\" (UID: \"25576ab9-760b-40e6-b7c7-866fbb7ed70c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z"
Feb 16 17:30:50 crc kubenswrapper[4794]: I0216 17:30:50.073584 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25576ab9-760b-40e6-b7c7-866fbb7ed70c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z\" (UID: \"25576ab9-760b-40e6-b7c7-866fbb7ed70c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z"
Feb 16 17:30:50 crc kubenswrapper[4794]: I0216 17:30:50.077574 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25576ab9-760b-40e6-b7c7-866fbb7ed70c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z\" (UID: \"25576ab9-760b-40e6-b7c7-866fbb7ed70c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z"
Feb 16 17:30:50 crc kubenswrapper[4794]: I0216 17:30:50.078048 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25576ab9-760b-40e6-b7c7-866fbb7ed70c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z\" (UID: \"25576ab9-760b-40e6-b7c7-866fbb7ed70c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z"
Feb 16 17:30:50 crc kubenswrapper[4794]: I0216 17:30:50.089041 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzqf7\" (UniqueName: \"kubernetes.io/projected/25576ab9-760b-40e6-b7c7-866fbb7ed70c-kube-api-access-pzqf7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z\" (UID: \"25576ab9-760b-40e6-b7c7-866fbb7ed70c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z"
Feb 16 17:30:50 crc kubenswrapper[4794]: I0216 17:30:50.141690 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z"
Feb 16 17:30:50 crc kubenswrapper[4794]: I0216 17:30:50.738831 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z"]
Feb 16 17:30:51 crc kubenswrapper[4794]: I0216 17:30:51.044982 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-nbn72"]
Feb 16 17:30:51 crc kubenswrapper[4794]: I0216 17:30:51.062374 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-l72f2"]
Feb 16 17:30:51 crc kubenswrapper[4794]: I0216 17:30:51.074125 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-nbn72"]
Feb 16 17:30:51 crc kubenswrapper[4794]: I0216 17:30:51.086979 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-l72f2"]
Feb 16 17:30:51 crc kubenswrapper[4794]: I0216 17:30:51.689961 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z" event={"ID":"25576ab9-760b-40e6-b7c7-866fbb7ed70c","Type":"ContainerStarted","Data":"a9a606122964105341db1c1f3bb249c2ff16792bb31ef34e83d994ad483b3f2e"}
Feb 16 17:30:51 crc kubenswrapper[4794]: I0216 17:30:51.690345 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z" event={"ID":"25576ab9-760b-40e6-b7c7-866fbb7ed70c","Type":"ContainerStarted","Data":"7818fb02d7da7abd5ff0e1a0b33615c9373f553109d2241d98c94f1b22bf5cce"}
Feb 16 17:30:51 crc kubenswrapper[4794]: I0216 17:30:51.714607 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z" podStartSLOduration=2.212032821 podStartE2EDuration="2.714585198s" podCreationTimestamp="2026-02-16 17:30:49 +0000 UTC" firstStartedPulling="2026-02-16 17:30:50.733317232 +0000 UTC m=+1876.681411879" lastFinishedPulling="2026-02-16 17:30:51.235869609 +0000 UTC m=+1877.183964256" observedRunningTime="2026-02-16 17:30:51.703830833 +0000 UTC m=+1877.651925520" watchObservedRunningTime="2026-02-16 17:30:51.714585198 +0000 UTC m=+1877.662679845"
Feb 16 17:30:52 crc kubenswrapper[4794]: I0216 17:30:52.810677 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aee4dca2-9581-44c6-91db-ce6516f9b05e" path="/var/lib/kubelet/pods/aee4dca2-9581-44c6-91db-ce6516f9b05e/volumes"
Feb 16 17:30:52 crc kubenswrapper[4794]: I0216 17:30:52.813631 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3bc8a6c-f954-4825-8853-316738b0eb94" path="/var/lib/kubelet/pods/c3bc8a6c-f954-4825-8853-316738b0eb94/volumes"
Feb 16 17:30:53 crc kubenswrapper[4794]: E0216 17:30:53.793845 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:30:58 crc kubenswrapper[4794]: I0216 17:30:58.079809 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-3566-account-create-update-8tq2m"]
Feb 16 17:30:58 crc kubenswrapper[4794]: I0216 17:30:58.096753 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-j5dwq"]
Feb 16 17:30:58 crc kubenswrapper[4794]: I0216 17:30:58.109091 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-3566-account-create-update-8tq2m"]
Feb 16 17:30:58 crc kubenswrapper[4794]: I0216 17:30:58.119977 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-j5dwq"]
Feb 16 17:30:58 crc kubenswrapper[4794]: E0216 17:30:58.794611 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:30:58 crc kubenswrapper[4794]: I0216 17:30:58.817843 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="806a0c64-26e2-4021-875a-b7224b615057" path="/var/lib/kubelet/pods/806a0c64-26e2-4021-875a-b7224b615057/volumes"
Feb 16 17:30:58 crc kubenswrapper[4794]: I0216 17:30:58.819278 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b39da8d5-1def-498c-9a64-d015fa5de3b3" path="/var/lib/kubelet/pods/b39da8d5-1def-498c-9a64-d015fa5de3b3/volumes"
Feb 16 17:31:03 crc kubenswrapper[4794]: I0216 17:31:03.035107 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9bqgp"]
Feb 16 17:31:03 crc kubenswrapper[4794]: I0216 17:31:03.048024 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9bqgp"]
Feb 16 17:31:04 crc kubenswrapper[4794]: I0216 17:31:04.813117 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb" path="/var/lib/kubelet/pods/6d7a9db0-dcc3-4dba-a84a-e5de6ca5ebeb/volumes"
Feb 16 17:31:06 crc kubenswrapper[4794]: E0216 17:31:06.794800 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:31:12 crc kubenswrapper[4794]: E0216 17:31:12.795028 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:31:21 crc kubenswrapper[4794]: E0216 17:31:21.793525 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:31:23 crc kubenswrapper[4794]: E0216 17:31:23.794511 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:31:32 crc kubenswrapper[4794]: E0216 17:31:32.794176 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:31:34 crc kubenswrapper[4794]: E0216 17:31:34.803626 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:31:35 crc kubenswrapper[4794]: I0216 17:31:35.058581 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-01fa-account-create-update-mng8s"]
Feb 16 17:31:35 crc kubenswrapper[4794]: I0216 17:31:35.078070 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-2ea9-account-create-update-7tt5t"]
Feb 16 17:31:35 crc kubenswrapper[4794]: I0216 17:31:35.089586 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-wd76h"]
Feb 16 17:31:35 crc kubenswrapper[4794]: I0216 17:31:35.102977 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-01fa-account-create-update-mng8s"]
Feb 16 17:31:35 crc kubenswrapper[4794]: I0216 17:31:35.119340 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-2ea9-account-create-update-7tt5t"]
Feb 16 17:31:35 crc kubenswrapper[4794]: I0216 17:31:35.131894 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-wd76h"]
Feb 16 17:31:36 crc kubenswrapper[4794]: I0216 17:31:36.058408 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-rrqcf"]
Feb 16 17:31:36 crc kubenswrapper[4794]: I0216 17:31:36.072819 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-rrqcf"]
Feb 16 17:31:36 crc kubenswrapper[4794]: I0216 17:31:36.091737 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-xp5cn"]
Feb 16 17:31:36 crc kubenswrapper[4794]: I0216 17:31:36.102717 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-vc8d5"]
Feb 16 17:31:36 crc kubenswrapper[4794]: I0216 17:31:36.113278 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5932-account-create-update-bd29m"]
Feb 16 17:31:36 crc kubenswrapper[4794]: I0216 17:31:36.124061 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-xp5cn"]
Feb 16 17:31:36 crc kubenswrapper[4794]: I0216 17:31:36.134589 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-3f33-account-create-update-bmfmg"]
Feb 16 17:31:36 crc kubenswrapper[4794]: I0216 17:31:36.144715 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-6pcwt"]
Feb 16 17:31:36 crc kubenswrapper[4794]: I0216 17:31:36.155195 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-vc8d5"]
Feb 16 17:31:36 crc kubenswrapper[4794]: I0216 17:31:36.164662 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5932-account-create-update-bd29m"]
Feb 16 17:31:36 crc kubenswrapper[4794]: I0216 17:31:36.175280 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-6pcwt"]
Feb 16 17:31:36 crc kubenswrapper[4794]: I0216 17:31:36.184620 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-3f33-account-create-update-bmfmg"]
Feb 16 17:31:36 crc kubenswrapper[4794]: I0216 17:31:36.806535 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eba0114-90ef-495f-b633-be0e999ee9db" path="/var/lib/kubelet/pods/0eba0114-90ef-495f-b633-be0e999ee9db/volumes"
Feb 16 17:31:36 crc kubenswrapper[4794]: I0216 17:31:36.807192 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eb6af8b-8f65-4725-a2bc-88339a37bf85" path="/var/lib/kubelet/pods/1eb6af8b-8f65-4725-a2bc-88339a37bf85/volumes"
Feb 16 17:31:36 crc kubenswrapper[4794]: I0216 17:31:36.808920 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22fec0db-d521-4e76-bd89-7c22ea6a8bb1" path="/var/lib/kubelet/pods/22fec0db-d521-4e76-bd89-7c22ea6a8bb1/volumes"
Feb 16 17:31:36 crc kubenswrapper[4794]: I0216 17:31:36.810043 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4" path="/var/lib/kubelet/pods/48f949ea-bb57-46ea-a3b7-dfa2cd3ed8a4/volumes"
Feb 16 17:31:36 crc kubenswrapper[4794]: I0216 17:31:36.811545 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5589f24e-f4c8-427e-ba13-f0ffb8358940" path="/var/lib/kubelet/pods/5589f24e-f4c8-427e-ba13-f0ffb8358940/volumes"
Feb 16 17:31:36 crc kubenswrapper[4794]: I0216 17:31:36.812777 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6505f038-47d3-4a1b-a939-11469306ff84" path="/var/lib/kubelet/pods/6505f038-47d3-4a1b-a939-11469306ff84/volumes"
Feb 16 17:31:36 crc kubenswrapper[4794]: I0216 17:31:36.813550 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6989884b-6a5b-4e42-a0c8-bfd3a1361057" path="/var/lib/kubelet/pods/6989884b-6a5b-4e42-a0c8-bfd3a1361057/volumes"
Feb 16 17:31:36 crc kubenswrapper[4794]: I0216 17:31:36.816768 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9e97513-5c89-4917-8e5a-d2230e694e3f" path="/var/lib/kubelet/pods/f9e97513-5c89-4917-8e5a-d2230e694e3f/volumes"
Feb 16 17:31:36 crc kubenswrapper[4794]: I0216 17:31:36.817420 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb8edc26-5ad8-440e-9d5b-942b0a287ea4" path="/var/lib/kubelet/pods/fb8edc26-5ad8-440e-9d5b-942b0a287ea4/volumes"
Feb 16 17:31:38 crc kubenswrapper[4794]: I0216 17:31:38.966511 4794 scope.go:117] "RemoveContainer" containerID="5e45cbe19ecb4c6c292eb2959ae5ea77a14adbd89b63acd3148b6a3a9f5f7e58"
Feb 16 17:31:39 crc kubenswrapper[4794]: I0216 17:31:39.009642 4794 scope.go:117] "RemoveContainer" containerID="b361f858b2a25ac83fb9cd20b3b7ef7c69f443dbfbcc0c2a577d2d34cebfc7e3"
Feb 16 17:31:39 crc kubenswrapper[4794]: I0216 17:31:39.071268 4794 scope.go:117] "RemoveContainer" containerID="0c8d4cc22b9fe6eab62d122b6ce1664ad3d47285de67635dd1363762627e7ad4"
Feb 16 17:31:39 crc kubenswrapper[4794]: I0216 17:31:39.125047 4794 scope.go:117] "RemoveContainer" containerID="f8ac86bc80c5233684c1b47c179a1df5d96139bfd69fb1eaf0d71038282f797d"
Feb 16 17:31:39 crc kubenswrapper[4794]: I0216 17:31:39.202482 4794 scope.go:117] "RemoveContainer" containerID="d329847a9ebf9636e9b55cd869afe7fc46d427b0ca5c513703af27dd785771ff"
Feb 16 17:31:39 crc kubenswrapper[4794]: I0216 17:31:39.270875 4794 scope.go:117] "RemoveContainer" containerID="447ffb8d8b4495130e9739fabe034c2edbfe34d056da67cd871631699252a06d"
Feb 16 17:31:39 crc kubenswrapper[4794]: I0216 17:31:39.326924 4794 scope.go:117] "RemoveContainer" containerID="630575e6e05bf43ed348d66618f77d949bba32704d4b42395e01551c8dadadf9"
Feb 16 17:31:39 crc kubenswrapper[4794]: I0216 17:31:39.356712 4794 scope.go:117] "RemoveContainer" containerID="268923bb88d3ff319485c8986c493ad543f3c0460287cabb6c4072e8fbd1d43a"
Feb 16 17:31:39 crc kubenswrapper[4794]: I0216 17:31:39.382424 4794 scope.go:117] "RemoveContainer" containerID="88e3906f0ca3fd28b8a0b47412e1e4a24f611740e2bc9e3bd7fb2503645ff84c"
Feb 16 17:31:39 crc kubenswrapper[4794]: I0216 17:31:39.406409 4794 scope.go:117] "RemoveContainer" containerID="4fb324774e2f6f84e3afb9ea82687141fc92d7dda51c974ea093be5619e031dd"
Feb 16 17:31:39 crc kubenswrapper[4794]: I0216 17:31:39.437879 4794 scope.go:117] "RemoveContainer" containerID="893e845e410be8e6b6a4dfd5bffbe3bb05b49af4c1da8177fb88b502bd7ceb60"
Feb 16 17:31:39 crc kubenswrapper[4794]: I0216 17:31:39.461705 4794 scope.go:117] "RemoveContainer" containerID="9bc110d2f764d6184910d501cd998ee52e2930479bfbae83d7a123976df54630"
Feb 16 17:31:39 crc kubenswrapper[4794]: I0216 17:31:39.485936 4794 scope.go:117] "RemoveContainer" containerID="35c3affb2961c8861c1b9db09d2342b5abdd819f5017af5b25f4de81066ec822"
Feb 16 17:31:39 crc kubenswrapper[4794]: I0216 17:31:39.512269 4794 scope.go:117] "RemoveContainer" containerID="040c3bcf07f107ec2e2e9901c34cbdf2916f485148627912dccbc483778aa13c"
Feb 16 17:31:39 crc kubenswrapper[4794]: I0216 17:31:39.547054 4794 scope.go:117] "RemoveContainer" containerID="2829c362c5a037ccd3c1ad307b5707931b39470677ae11b3443c439c1a392495"
Feb 16 17:31:39 crc kubenswrapper[4794]: I0216 17:31:39.573557 4794 scope.go:117] "RemoveContainer"
containerID="fbbbf2b86d6f7aca18f522d584fa582d33447ddaebfe59cf234b9265bca71fd0" Feb 16 17:31:39 crc kubenswrapper[4794]: I0216 17:31:39.601921 4794 scope.go:117] "RemoveContainer" containerID="2a632e977a49b27ea68bee7de6a2f979b999ad36f19d9f783c004e149891fc59" Feb 16 17:31:39 crc kubenswrapper[4794]: I0216 17:31:39.627041 4794 scope.go:117] "RemoveContainer" containerID="ae5408138554b5b91af1e51726e147d638e8ba51378075aaa6abe78224194f31" Feb 16 17:31:39 crc kubenswrapper[4794]: I0216 17:31:39.647267 4794 scope.go:117] "RemoveContainer" containerID="e8830bf7dd6f89c0101e2fbd6ed08deab0a66b1aad535c5baccb6b9493aea4ea" Feb 16 17:31:39 crc kubenswrapper[4794]: I0216 17:31:39.673256 4794 scope.go:117] "RemoveContainer" containerID="f37f4684a09448f6f61fc02bd7ce900a1e3657f204183b4716858e9c36fae406" Feb 16 17:31:41 crc kubenswrapper[4794]: I0216 17:31:41.033465 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-tk65m"] Feb 16 17:31:41 crc kubenswrapper[4794]: I0216 17:31:41.054693 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-tk65m"] Feb 16 17:31:42 crc kubenswrapper[4794]: I0216 17:31:42.807432 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c38fe9c-5f26-457a-9209-688ba917fc8c" path="/var/lib/kubelet/pods/9c38fe9c-5f26-457a-9209-688ba917fc8c/volumes" Feb 16 17:31:45 crc kubenswrapper[4794]: I0216 17:31:45.795485 4794 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:31:45 crc kubenswrapper[4794]: E0216 17:31:45.916969 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 17:31:45 crc kubenswrapper[4794]: E0216 17:31:45.917026 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 17:31:45 crc kubenswrapper[4794]: E0216 17:31:45.917149 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2h5l
2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7gcsf_openstack(c695f880-15cb-45b1-8545-60d8437ec631): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:31:45 crc kubenswrapper[4794]: E0216 17:31:45.918257 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:31:49 crc kubenswrapper[4794]: E0216 17:31:49.911694 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 17:31:49 crc kubenswrapper[4794]: E0216 17:31:49.912319 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 17:31:49 crc kubenswrapper[4794]: E0216 17:31:49.912486 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh58dh6ch557h84h55ch564h5bh58fh5c8h5d4h584h669h667h569h59hd5hdbh9dh67ch5f9h59fh597h96h664h687h66dhfch5ddh5b7h88h59cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9v9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8981f528-1f74-4d56-a93c-22860725b490): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:31:49 crc kubenswrapper[4794]: E0216 17:31:49.913688 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:31:50 crc kubenswrapper[4794]: I0216 17:31:50.140511 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:31:50 crc kubenswrapper[4794]: I0216 17:31:50.140565 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:32:00 crc kubenswrapper[4794]: E0216 17:32:00.795389 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:32:02 crc kubenswrapper[4794]: E0216 17:32:02.793575 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:32:13 crc kubenswrapper[4794]: I0216 17:32:13.046150 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-fs2n9"] Feb 16 17:32:13 crc kubenswrapper[4794]: I0216 17:32:13.058343 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-fs2n9"] Feb 16 17:32:13 crc 
kubenswrapper[4794]: E0216 17:32:13.792938 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:32:14 crc kubenswrapper[4794]: E0216 17:32:14.810608 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:32:14 crc kubenswrapper[4794]: I0216 17:32:14.811752 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67e15f05-9d62-45f7-a278-aeb9583be1a3" path="/var/lib/kubelet/pods/67e15f05-9d62-45f7-a278-aeb9583be1a3/volumes" Feb 16 17:32:20 crc kubenswrapper[4794]: I0216 17:32:20.069117 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-4fdhf"] Feb 16 17:32:20 crc kubenswrapper[4794]: I0216 17:32:20.094426 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-4fdhf"] Feb 16 17:32:20 crc kubenswrapper[4794]: I0216 17:32:20.140760 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:32:20 crc kubenswrapper[4794]: I0216 17:32:20.140840 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:32:20 crc kubenswrapper[4794]: I0216 17:32:20.805832 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7473b04b-0d0a-4c73-ac81-f0ad2959dc79" path="/var/lib/kubelet/pods/7473b04b-0d0a-4c73-ac81-f0ad2959dc79/volumes" Feb 16 17:32:24 crc kubenswrapper[4794]: I0216 17:32:24.054847 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wnm9v"] Feb 16 17:32:24 crc kubenswrapper[4794]: I0216 17:32:24.072363 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wnm9v"] Feb 16 17:32:24 crc kubenswrapper[4794]: I0216 17:32:24.804151 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15d18e7f-9229-47e4-97f3-d5515e5c59fb" path="/var/lib/kubelet/pods/15d18e7f-9229-47e4-97f3-d5515e5c59fb/volumes" Feb 16 17:32:27 crc kubenswrapper[4794]: E0216 17:32:27.793882 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:32:27 crc kubenswrapper[4794]: E0216 17:32:27.794045 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:32:36 crc kubenswrapper[4794]: I0216 17:32:36.075738 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-rs2k4"] Feb 16 17:32:36 crc kubenswrapper[4794]: I0216 17:32:36.092896 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-db-sync-rs2k4"] Feb 16 17:32:36 crc kubenswrapper[4794]: I0216 17:32:36.806665 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="865acfbb-330f-4594-a7d8-64962cab3cd5" path="/var/lib/kubelet/pods/865acfbb-330f-4594-a7d8-64962cab3cd5/volumes" Feb 16 17:32:40 crc kubenswrapper[4794]: I0216 17:32:40.030962 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-t9x9p"] Feb 16 17:32:40 crc kubenswrapper[4794]: I0216 17:32:40.041986 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-t9x9p"] Feb 16 17:32:40 crc kubenswrapper[4794]: I0216 17:32:40.114556 4794 scope.go:117] "RemoveContainer" containerID="9dbd51902899322ece34a2733ec3d8e16d85e9ac734b4818b20a5762bdbbbd8f" Feb 16 17:32:40 crc kubenswrapper[4794]: I0216 17:32:40.162171 4794 scope.go:117] "RemoveContainer" containerID="c0014598bc2a512223afdf6b71b9f3b4a272584045d78c4d818756b0f6ddd386" Feb 16 17:32:40 crc kubenswrapper[4794]: I0216 17:32:40.251088 4794 scope.go:117] "RemoveContainer" containerID="5e365cd8b92e9b70ab7a1ff326aa3ea071de71ca4a3fca7e51d64e7410449362" Feb 16 17:32:40 crc kubenswrapper[4794]: I0216 17:32:40.341180 4794 scope.go:117] "RemoveContainer" containerID="2979acd342e4124f130f2b0129a7af906efdbe4e15b83cddb51e005fb30ea921" Feb 16 17:32:40 crc kubenswrapper[4794]: I0216 17:32:40.426237 4794 scope.go:117] "RemoveContainer" containerID="4aaae524dab826255e6a2ba268bb0f7d36c73d90aa6fb43b268b42cf915e4a6d" Feb 16 17:32:40 crc kubenswrapper[4794]: E0216 17:32:40.797913 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:32:40 crc kubenswrapper[4794]: I0216 17:32:40.811407 4794 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="706ed090-ccb8-4488-ae71-8c991476fd08" path="/var/lib/kubelet/pods/706ed090-ccb8-4488-ae71-8c991476fd08/volumes" Feb 16 17:32:42 crc kubenswrapper[4794]: E0216 17:32:42.794192 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:32:50 crc kubenswrapper[4794]: I0216 17:32:50.140907 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:32:50 crc kubenswrapper[4794]: I0216 17:32:50.141358 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:32:50 crc kubenswrapper[4794]: I0216 17:32:50.141396 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 17:32:50 crc kubenswrapper[4794]: I0216 17:32:50.142182 4794 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"43fc9dd7f2fae3a5a4c080fa56f687e2435f83f7280f2c8e8a10fb66c8654d44"} pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:32:50 crc kubenswrapper[4794]: I0216 
17:32:50.142234 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" containerID="cri-o://43fc9dd7f2fae3a5a4c080fa56f687e2435f83f7280f2c8e8a10fb66c8654d44" gracePeriod=600 Feb 16 17:32:51 crc kubenswrapper[4794]: I0216 17:32:51.063074 4794 generic.go:334] "Generic (PLEG): container finished" podID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerID="43fc9dd7f2fae3a5a4c080fa56f687e2435f83f7280f2c8e8a10fb66c8654d44" exitCode=0 Feb 16 17:32:51 crc kubenswrapper[4794]: I0216 17:32:51.063245 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerDied","Data":"43fc9dd7f2fae3a5a4c080fa56f687e2435f83f7280f2c8e8a10fb66c8654d44"} Feb 16 17:32:51 crc kubenswrapper[4794]: I0216 17:32:51.063473 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2"} Feb 16 17:32:51 crc kubenswrapper[4794]: I0216 17:32:51.063512 4794 scope.go:117] "RemoveContainer" containerID="6e22a7be8018d748f49f3c871459e97f76ee04f859d3d1d0d46b4bbb2dd36691" Feb 16 17:32:51 crc kubenswrapper[4794]: E0216 17:32:51.793203 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:32:53 crc kubenswrapper[4794]: E0216 17:32:53.806880 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:33:03 crc kubenswrapper[4794]: E0216 17:33:03.796415 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:33:08 crc kubenswrapper[4794]: E0216 17:33:08.795334 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:33:17 crc kubenswrapper[4794]: E0216 17:33:17.795039 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:33:22 crc kubenswrapper[4794]: E0216 17:33:22.793705 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:33:30 crc kubenswrapper[4794]: E0216 17:33:30.793402 4794 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:33:33 crc kubenswrapper[4794]: E0216 17:33:33.793893 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:33:36 crc kubenswrapper[4794]: I0216 17:33:36.085365 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-g8d8v"] Feb 16 17:33:36 crc kubenswrapper[4794]: I0216 17:33:36.097523 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-g8d8v"] Feb 16 17:33:36 crc kubenswrapper[4794]: I0216 17:33:36.803756 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0770add-b35a-4790-b877-78e7a2661b48" path="/var/lib/kubelet/pods/e0770add-b35a-4790-b877-78e7a2661b48/volumes" Feb 16 17:33:37 crc kubenswrapper[4794]: I0216 17:33:37.041816 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9ed5-account-create-update-4c5kr"] Feb 16 17:33:37 crc kubenswrapper[4794]: I0216 17:33:37.054943 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-8970-account-create-update-bjpr9"] Feb 16 17:33:37 crc kubenswrapper[4794]: I0216 17:33:37.071473 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-vsrl6"] Feb 16 17:33:37 crc kubenswrapper[4794]: I0216 17:33:37.081143 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9ed5-account-create-update-4c5kr"] Feb 16 17:33:37 crc 
kubenswrapper[4794]: I0216 17:33:37.090469 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-8970-account-create-update-bjpr9"] Feb 16 17:33:37 crc kubenswrapper[4794]: I0216 17:33:37.100052 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-sn2z4"] Feb 16 17:33:37 crc kubenswrapper[4794]: I0216 17:33:37.109087 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7154-account-create-update-wzjn5"] Feb 16 17:33:37 crc kubenswrapper[4794]: I0216 17:33:37.118112 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-vsrl6"] Feb 16 17:33:37 crc kubenswrapper[4794]: I0216 17:33:37.126568 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7154-account-create-update-wzjn5"] Feb 16 17:33:37 crc kubenswrapper[4794]: I0216 17:33:37.135613 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-sn2z4"] Feb 16 17:33:38 crc kubenswrapper[4794]: I0216 17:33:38.808624 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1469ebf3-80e2-45db-bb76-9c0d75fa6ba0" path="/var/lib/kubelet/pods/1469ebf3-80e2-45db-bb76-9c0d75fa6ba0/volumes" Feb 16 17:33:38 crc kubenswrapper[4794]: I0216 17:33:38.809945 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22fb06b0-be61-4104-bd06-e83653551448" path="/var/lib/kubelet/pods/22fb06b0-be61-4104-bd06-e83653551448/volumes" Feb 16 17:33:38 crc kubenswrapper[4794]: I0216 17:33:38.810869 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="589ff3be-38fe-4b42-9465-749794f9d7ac" path="/var/lib/kubelet/pods/589ff3be-38fe-4b42-9465-749794f9d7ac/volumes" Feb 16 17:33:38 crc kubenswrapper[4794]: I0216 17:33:38.811747 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab6d5750-0a23-4cce-8557-b0b1d867f91b" 
path="/var/lib/kubelet/pods/ab6d5750-0a23-4cce-8557-b0b1d867f91b/volumes" Feb 16 17:33:38 crc kubenswrapper[4794]: I0216 17:33:38.813323 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa5596a-0a62-46eb-9652-e6fd66238582" path="/var/lib/kubelet/pods/baa5596a-0a62-46eb-9652-e6fd66238582/volumes" Feb 16 17:33:40 crc kubenswrapper[4794]: I0216 17:33:40.553462 4794 scope.go:117] "RemoveContainer" containerID="7f12354d91da9ae57eb9a6a0abd89f7615e632c66398378e2e904dc37a6b95a0" Feb 16 17:33:40 crc kubenswrapper[4794]: I0216 17:33:40.581205 4794 scope.go:117] "RemoveContainer" containerID="9c411ab12e345bf5eb3aa9ad5f019b654392dfe10ca69b38d59cefff19ea1efe" Feb 16 17:33:40 crc kubenswrapper[4794]: I0216 17:33:40.641373 4794 scope.go:117] "RemoveContainer" containerID="00af2cf0c0dc48b41c8b45fbe8fd9b92f4071b05ba2749a483c92ce3cb5c8a31" Feb 16 17:33:40 crc kubenswrapper[4794]: I0216 17:33:40.700742 4794 scope.go:117] "RemoveContainer" containerID="8f80acd8b9b614a25974d6d5208ae776bb28b2b98fccdb9c6243444def7d7447" Feb 16 17:33:40 crc kubenswrapper[4794]: I0216 17:33:40.778384 4794 scope.go:117] "RemoveContainer" containerID="8b9d4df14b19d8a7d1f25a537e2b0eedc5720d1d72f45bd9882b2e2d1e86d954" Feb 16 17:33:40 crc kubenswrapper[4794]: I0216 17:33:40.826194 4794 scope.go:117] "RemoveContainer" containerID="e07c6e5e2092eed02c076d0483a4fa722898fa351f4362e0d7732096b6b23487" Feb 16 17:33:40 crc kubenswrapper[4794]: I0216 17:33:40.889291 4794 scope.go:117] "RemoveContainer" containerID="0bece7d5f48dd15b39b877b5ae23d371ea7e7316114f0f37dd3c64ed978a21cf" Feb 16 17:33:43 crc kubenswrapper[4794]: E0216 17:33:43.793574 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:33:46 crc 
kubenswrapper[4794]: E0216 17:33:46.794434 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:33:55 crc kubenswrapper[4794]: E0216 17:33:55.792913 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:33:57 crc kubenswrapper[4794]: E0216 17:33:57.795075 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:34:09 crc kubenswrapper[4794]: E0216 17:34:09.793529 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:34:10 crc kubenswrapper[4794]: I0216 17:34:10.037218 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4gn4j"] Feb 16 17:34:10 crc kubenswrapper[4794]: I0216 17:34:10.047958 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4gn4j"] Feb 16 17:34:10 crc kubenswrapper[4794]: E0216 17:34:10.793208 4794 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:34:10 crc kubenswrapper[4794]: I0216 17:34:10.807996 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="619e01e3-7fcb-4b21-b1df-07ba70374e09" path="/var/lib/kubelet/pods/619e01e3-7fcb-4b21-b1df-07ba70374e09/volumes" Feb 16 17:34:20 crc kubenswrapper[4794]: E0216 17:34:20.796777 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:34:23 crc kubenswrapper[4794]: E0216 17:34:23.795723 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:34:31 crc kubenswrapper[4794]: I0216 17:34:31.048092 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-4kglx"] Feb 16 17:34:31 crc kubenswrapper[4794]: I0216 17:34:31.068252 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-4kglx"] Feb 16 17:34:32 crc kubenswrapper[4794]: E0216 17:34:32.805276 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:34:32 crc kubenswrapper[4794]: I0216 17:34:32.816278 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c08d48e2-27f0-44e5-a13a-815719c3f5dc" path="/var/lib/kubelet/pods/c08d48e2-27f0-44e5-a13a-815719c3f5dc/volumes" Feb 16 17:34:33 crc kubenswrapper[4794]: I0216 17:34:33.039020 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-4618-account-create-update-s8vpk"] Feb 16 17:34:33 crc kubenswrapper[4794]: I0216 17:34:33.051939 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-rn6n9"] Feb 16 17:34:33 crc kubenswrapper[4794]: I0216 17:34:33.063655 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-rn6n9"] Feb 16 17:34:33 crc kubenswrapper[4794]: I0216 17:34:33.074065 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-4618-account-create-update-s8vpk"] Feb 16 17:34:34 crc kubenswrapper[4794]: I0216 17:34:34.036570 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9j8qt"] Feb 16 17:34:34 crc kubenswrapper[4794]: I0216 17:34:34.049208 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9j8qt"] Feb 16 17:34:34 crc kubenswrapper[4794]: E0216 17:34:34.810055 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:34:34 crc kubenswrapper[4794]: I0216 17:34:34.811521 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="521b6a44-f328-4e6e-926b-f27a9b9810ad" path="/var/lib/kubelet/pods/521b6a44-f328-4e6e-926b-f27a9b9810ad/volumes" Feb 16 17:34:34 crc kubenswrapper[4794]: I0216 17:34:34.812251 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78df36ef-5c86-41c8-9085-7ce98caad880" path="/var/lib/kubelet/pods/78df36ef-5c86-41c8-9085-7ce98caad880/volumes" Feb 16 17:34:34 crc kubenswrapper[4794]: I0216 17:34:34.812933 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6bd641d-034e-45b5-9379-422fe35d0054" path="/var/lib/kubelet/pods/a6bd641d-034e-45b5-9379-422fe35d0054/volumes" Feb 16 17:34:41 crc kubenswrapper[4794]: I0216 17:34:41.039514 4794 scope.go:117] "RemoveContainer" containerID="db728f45a7d72303db0063e0c79c648d9723af4783855140e9fc45dad0d2b4ea" Feb 16 17:34:41 crc kubenswrapper[4794]: I0216 17:34:41.070602 4794 scope.go:117] "RemoveContainer" containerID="e8abf350a47b29c3209ffe1180e17a1433efc2a261f3f0546d5ea8c697b07457" Feb 16 17:34:41 crc kubenswrapper[4794]: I0216 17:34:41.162631 4794 scope.go:117] "RemoveContainer" containerID="cef756b523489089cdfc52fe85cf59247cde121a8515537da9a4a1f17ba2c217" Feb 16 17:34:41 crc kubenswrapper[4794]: I0216 17:34:41.225164 4794 scope.go:117] "RemoveContainer" containerID="12f17d5ac32e08af8912b4dd207c6af189a74299ceefe361e64172031e650797" Feb 16 17:34:41 crc kubenswrapper[4794]: I0216 17:34:41.259188 4794 scope.go:117] "RemoveContainer" containerID="fab724d84b281db1c2dea99f457728689590085c962814a33fdffd092056286a" Feb 16 17:34:46 crc kubenswrapper[4794]: E0216 17:34:46.794489 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:34:48 crc kubenswrapper[4794]: I0216 17:34:48.046907 4794 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-sf58s"] Feb 16 17:34:48 crc kubenswrapper[4794]: I0216 17:34:48.058111 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-sf58s"] Feb 16 17:34:48 crc kubenswrapper[4794]: E0216 17:34:48.794923 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:34:48 crc kubenswrapper[4794]: I0216 17:34:48.808103 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="377738df-5701-4cde-a811-3c975e20fce7" path="/var/lib/kubelet/pods/377738df-5701-4cde-a811-3c975e20fce7/volumes" Feb 16 17:34:50 crc kubenswrapper[4794]: I0216 17:34:50.140635 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:34:50 crc kubenswrapper[4794]: I0216 17:34:50.141001 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:34:59 crc kubenswrapper[4794]: E0216 17:34:59.793826 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:35:01 crc kubenswrapper[4794]: I0216 17:35:01.215231 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4dkjc"] Feb 16 17:35:01 crc kubenswrapper[4794]: I0216 17:35:01.221956 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4dkjc" Feb 16 17:35:01 crc kubenswrapper[4794]: I0216 17:35:01.229470 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dkjc"] Feb 16 17:35:01 crc kubenswrapper[4794]: I0216 17:35:01.366006 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2997233e-8e5e-41b2-a59d-4365abc3d109-catalog-content\") pod \"redhat-marketplace-4dkjc\" (UID: \"2997233e-8e5e-41b2-a59d-4365abc3d109\") " pod="openshift-marketplace/redhat-marketplace-4dkjc" Feb 16 17:35:01 crc kubenswrapper[4794]: I0216 17:35:01.366475 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2997233e-8e5e-41b2-a59d-4365abc3d109-utilities\") pod \"redhat-marketplace-4dkjc\" (UID: \"2997233e-8e5e-41b2-a59d-4365abc3d109\") " pod="openshift-marketplace/redhat-marketplace-4dkjc" Feb 16 17:35:01 crc kubenswrapper[4794]: I0216 17:35:01.366538 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbgrc\" (UniqueName: \"kubernetes.io/projected/2997233e-8e5e-41b2-a59d-4365abc3d109-kube-api-access-cbgrc\") pod \"redhat-marketplace-4dkjc\" (UID: \"2997233e-8e5e-41b2-a59d-4365abc3d109\") " pod="openshift-marketplace/redhat-marketplace-4dkjc" Feb 16 17:35:01 crc kubenswrapper[4794]: I0216 17:35:01.468963 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2997233e-8e5e-41b2-a59d-4365abc3d109-catalog-content\") pod \"redhat-marketplace-4dkjc\" (UID: \"2997233e-8e5e-41b2-a59d-4365abc3d109\") " pod="openshift-marketplace/redhat-marketplace-4dkjc" Feb 16 17:35:01 crc kubenswrapper[4794]: I0216 17:35:01.469073 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2997233e-8e5e-41b2-a59d-4365abc3d109-utilities\") pod \"redhat-marketplace-4dkjc\" (UID: \"2997233e-8e5e-41b2-a59d-4365abc3d109\") " pod="openshift-marketplace/redhat-marketplace-4dkjc" Feb 16 17:35:01 crc kubenswrapper[4794]: I0216 17:35:01.469118 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbgrc\" (UniqueName: \"kubernetes.io/projected/2997233e-8e5e-41b2-a59d-4365abc3d109-kube-api-access-cbgrc\") pod \"redhat-marketplace-4dkjc\" (UID: \"2997233e-8e5e-41b2-a59d-4365abc3d109\") " pod="openshift-marketplace/redhat-marketplace-4dkjc" Feb 16 17:35:01 crc kubenswrapper[4794]: I0216 17:35:01.469988 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2997233e-8e5e-41b2-a59d-4365abc3d109-catalog-content\") pod \"redhat-marketplace-4dkjc\" (UID: \"2997233e-8e5e-41b2-a59d-4365abc3d109\") " pod="openshift-marketplace/redhat-marketplace-4dkjc" Feb 16 17:35:01 crc kubenswrapper[4794]: I0216 17:35:01.470173 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2997233e-8e5e-41b2-a59d-4365abc3d109-utilities\") pod \"redhat-marketplace-4dkjc\" (UID: \"2997233e-8e5e-41b2-a59d-4365abc3d109\") " pod="openshift-marketplace/redhat-marketplace-4dkjc" Feb 16 17:35:01 crc kubenswrapper[4794]: I0216 17:35:01.493074 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbgrc\" (UniqueName: 
\"kubernetes.io/projected/2997233e-8e5e-41b2-a59d-4365abc3d109-kube-api-access-cbgrc\") pod \"redhat-marketplace-4dkjc\" (UID: \"2997233e-8e5e-41b2-a59d-4365abc3d109\") " pod="openshift-marketplace/redhat-marketplace-4dkjc" Feb 16 17:35:01 crc kubenswrapper[4794]: I0216 17:35:01.549504 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4dkjc" Feb 16 17:35:01 crc kubenswrapper[4794]: E0216 17:35:01.800960 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:35:02 crc kubenswrapper[4794]: I0216 17:35:02.010283 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dkjc"] Feb 16 17:35:02 crc kubenswrapper[4794]: I0216 17:35:02.545275 4794 generic.go:334] "Generic (PLEG): container finished" podID="2997233e-8e5e-41b2-a59d-4365abc3d109" containerID="2199b0ca592cde4c891e9c76ae6a32586fe10058871e33131f8355ddfa781d63" exitCode=0 Feb 16 17:35:02 crc kubenswrapper[4794]: I0216 17:35:02.549989 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dkjc" event={"ID":"2997233e-8e5e-41b2-a59d-4365abc3d109","Type":"ContainerDied","Data":"2199b0ca592cde4c891e9c76ae6a32586fe10058871e33131f8355ddfa781d63"} Feb 16 17:35:02 crc kubenswrapper[4794]: I0216 17:35:02.550155 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dkjc" event={"ID":"2997233e-8e5e-41b2-a59d-4365abc3d109","Type":"ContainerStarted","Data":"62702ebad661ef267bcfae89d2706437cc49fc6c7e722e5b88f28916b6d023ab"} Feb 16 17:35:03 crc kubenswrapper[4794]: I0216 17:35:03.557293 4794 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-4dkjc" event={"ID":"2997233e-8e5e-41b2-a59d-4365abc3d109","Type":"ContainerStarted","Data":"a0ad1975fe90eae4009193d67506501861f81f86718dbd554dd57beab22ab364"} Feb 16 17:35:04 crc kubenswrapper[4794]: I0216 17:35:04.570100 4794 generic.go:334] "Generic (PLEG): container finished" podID="2997233e-8e5e-41b2-a59d-4365abc3d109" containerID="a0ad1975fe90eae4009193d67506501861f81f86718dbd554dd57beab22ab364" exitCode=0 Feb 16 17:35:04 crc kubenswrapper[4794]: I0216 17:35:04.570224 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dkjc" event={"ID":"2997233e-8e5e-41b2-a59d-4365abc3d109","Type":"ContainerDied","Data":"a0ad1975fe90eae4009193d67506501861f81f86718dbd554dd57beab22ab364"} Feb 16 17:35:05 crc kubenswrapper[4794]: I0216 17:35:05.583269 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dkjc" event={"ID":"2997233e-8e5e-41b2-a59d-4365abc3d109","Type":"ContainerStarted","Data":"32921c132f133c5a8dfebb3320ee115f330845b9bce7c1a0db0050bda42f2778"} Feb 16 17:35:05 crc kubenswrapper[4794]: I0216 17:35:05.611920 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4dkjc" podStartSLOduration=2.1783909120000002 podStartE2EDuration="4.611897798s" podCreationTimestamp="2026-02-16 17:35:01 +0000 UTC" firstStartedPulling="2026-02-16 17:35:02.546930177 +0000 UTC m=+2128.495024824" lastFinishedPulling="2026-02-16 17:35:04.980437063 +0000 UTC m=+2130.928531710" observedRunningTime="2026-02-16 17:35:05.604325718 +0000 UTC m=+2131.552420375" watchObservedRunningTime="2026-02-16 17:35:05.611897798 +0000 UTC m=+2131.559992455" Feb 16 17:35:11 crc kubenswrapper[4794]: I0216 17:35:11.550611 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4dkjc" Feb 16 17:35:11 crc kubenswrapper[4794]: I0216 
17:35:11.551260 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4dkjc" Feb 16 17:35:11 crc kubenswrapper[4794]: I0216 17:35:11.609204 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4dkjc" Feb 16 17:35:11 crc kubenswrapper[4794]: I0216 17:35:11.722635 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4dkjc" Feb 16 17:35:11 crc kubenswrapper[4794]: E0216 17:35:11.793752 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:35:11 crc kubenswrapper[4794]: I0216 17:35:11.868326 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dkjc"] Feb 16 17:35:12 crc kubenswrapper[4794]: E0216 17:35:12.793751 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:35:13 crc kubenswrapper[4794]: I0216 17:35:13.693784 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4dkjc" podUID="2997233e-8e5e-41b2-a59d-4365abc3d109" containerName="registry-server" containerID="cri-o://32921c132f133c5a8dfebb3320ee115f330845b9bce7c1a0db0050bda42f2778" gracePeriod=2 Feb 16 17:35:13 crc kubenswrapper[4794]: E0216 17:35:13.927750 4794 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2997233e_8e5e_41b2_a59d_4365abc3d109.slice/crio-conmon-32921c132f133c5a8dfebb3320ee115f330845b9bce7c1a0db0050bda42f2778.scope\": RecentStats: unable to find data in memory cache]" Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.259026 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4dkjc" Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.391942 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2997233e-8e5e-41b2-a59d-4365abc3d109-utilities\") pod \"2997233e-8e5e-41b2-a59d-4365abc3d109\" (UID: \"2997233e-8e5e-41b2-a59d-4365abc3d109\") " Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.392181 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2997233e-8e5e-41b2-a59d-4365abc3d109-catalog-content\") pod \"2997233e-8e5e-41b2-a59d-4365abc3d109\" (UID: \"2997233e-8e5e-41b2-a59d-4365abc3d109\") " Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.392366 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbgrc\" (UniqueName: \"kubernetes.io/projected/2997233e-8e5e-41b2-a59d-4365abc3d109-kube-api-access-cbgrc\") pod \"2997233e-8e5e-41b2-a59d-4365abc3d109\" (UID: \"2997233e-8e5e-41b2-a59d-4365abc3d109\") " Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.393079 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2997233e-8e5e-41b2-a59d-4365abc3d109-utilities" (OuterVolumeSpecName: "utilities") pod "2997233e-8e5e-41b2-a59d-4365abc3d109" (UID: "2997233e-8e5e-41b2-a59d-4365abc3d109"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.393899 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2997233e-8e5e-41b2-a59d-4365abc3d109-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.398154 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2997233e-8e5e-41b2-a59d-4365abc3d109-kube-api-access-cbgrc" (OuterVolumeSpecName: "kube-api-access-cbgrc") pod "2997233e-8e5e-41b2-a59d-4365abc3d109" (UID: "2997233e-8e5e-41b2-a59d-4365abc3d109"). InnerVolumeSpecName "kube-api-access-cbgrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.418489 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2997233e-8e5e-41b2-a59d-4365abc3d109-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2997233e-8e5e-41b2-a59d-4365abc3d109" (UID: "2997233e-8e5e-41b2-a59d-4365abc3d109"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.496234 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2997233e-8e5e-41b2-a59d-4365abc3d109-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.496293 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbgrc\" (UniqueName: \"kubernetes.io/projected/2997233e-8e5e-41b2-a59d-4365abc3d109-kube-api-access-cbgrc\") on node \"crc\" DevicePath \"\"" Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.709075 4794 generic.go:334] "Generic (PLEG): container finished" podID="2997233e-8e5e-41b2-a59d-4365abc3d109" containerID="32921c132f133c5a8dfebb3320ee115f330845b9bce7c1a0db0050bda42f2778" exitCode=0 Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.709123 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dkjc" event={"ID":"2997233e-8e5e-41b2-a59d-4365abc3d109","Type":"ContainerDied","Data":"32921c132f133c5a8dfebb3320ee115f330845b9bce7c1a0db0050bda42f2778"} Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.709160 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dkjc" event={"ID":"2997233e-8e5e-41b2-a59d-4365abc3d109","Type":"ContainerDied","Data":"62702ebad661ef267bcfae89d2706437cc49fc6c7e722e5b88f28916b6d023ab"} Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.709182 4794 scope.go:117] "RemoveContainer" containerID="32921c132f133c5a8dfebb3320ee115f330845b9bce7c1a0db0050bda42f2778" Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.709208 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4dkjc" Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.747344 4794 scope.go:117] "RemoveContainer" containerID="a0ad1975fe90eae4009193d67506501861f81f86718dbd554dd57beab22ab364" Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.800034 4794 scope.go:117] "RemoveContainer" containerID="2199b0ca592cde4c891e9c76ae6a32586fe10058871e33131f8355ddfa781d63" Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.810443 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dkjc"] Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.813025 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dkjc"] Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.854198 4794 scope.go:117] "RemoveContainer" containerID="32921c132f133c5a8dfebb3320ee115f330845b9bce7c1a0db0050bda42f2778" Feb 16 17:35:14 crc kubenswrapper[4794]: E0216 17:35:14.854758 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32921c132f133c5a8dfebb3320ee115f330845b9bce7c1a0db0050bda42f2778\": container with ID starting with 32921c132f133c5a8dfebb3320ee115f330845b9bce7c1a0db0050bda42f2778 not found: ID does not exist" containerID="32921c132f133c5a8dfebb3320ee115f330845b9bce7c1a0db0050bda42f2778" Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.854821 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32921c132f133c5a8dfebb3320ee115f330845b9bce7c1a0db0050bda42f2778"} err="failed to get container status \"32921c132f133c5a8dfebb3320ee115f330845b9bce7c1a0db0050bda42f2778\": rpc error: code = NotFound desc = could not find container \"32921c132f133c5a8dfebb3320ee115f330845b9bce7c1a0db0050bda42f2778\": container with ID starting with 32921c132f133c5a8dfebb3320ee115f330845b9bce7c1a0db0050bda42f2778 not found: 
ID does not exist" Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.854861 4794 scope.go:117] "RemoveContainer" containerID="a0ad1975fe90eae4009193d67506501861f81f86718dbd554dd57beab22ab364" Feb 16 17:35:14 crc kubenswrapper[4794]: E0216 17:35:14.855195 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0ad1975fe90eae4009193d67506501861f81f86718dbd554dd57beab22ab364\": container with ID starting with a0ad1975fe90eae4009193d67506501861f81f86718dbd554dd57beab22ab364 not found: ID does not exist" containerID="a0ad1975fe90eae4009193d67506501861f81f86718dbd554dd57beab22ab364" Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.855216 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0ad1975fe90eae4009193d67506501861f81f86718dbd554dd57beab22ab364"} err="failed to get container status \"a0ad1975fe90eae4009193d67506501861f81f86718dbd554dd57beab22ab364\": rpc error: code = NotFound desc = could not find container \"a0ad1975fe90eae4009193d67506501861f81f86718dbd554dd57beab22ab364\": container with ID starting with a0ad1975fe90eae4009193d67506501861f81f86718dbd554dd57beab22ab364 not found: ID does not exist" Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.855233 4794 scope.go:117] "RemoveContainer" containerID="2199b0ca592cde4c891e9c76ae6a32586fe10058871e33131f8355ddfa781d63" Feb 16 17:35:14 crc kubenswrapper[4794]: E0216 17:35:14.855562 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2199b0ca592cde4c891e9c76ae6a32586fe10058871e33131f8355ddfa781d63\": container with ID starting with 2199b0ca592cde4c891e9c76ae6a32586fe10058871e33131f8355ddfa781d63 not found: ID does not exist" containerID="2199b0ca592cde4c891e9c76ae6a32586fe10058871e33131f8355ddfa781d63" Feb 16 17:35:14 crc kubenswrapper[4794]: I0216 17:35:14.855585 4794 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2199b0ca592cde4c891e9c76ae6a32586fe10058871e33131f8355ddfa781d63"} err="failed to get container status \"2199b0ca592cde4c891e9c76ae6a32586fe10058871e33131f8355ddfa781d63\": rpc error: code = NotFound desc = could not find container \"2199b0ca592cde4c891e9c76ae6a32586fe10058871e33131f8355ddfa781d63\": container with ID starting with 2199b0ca592cde4c891e9c76ae6a32586fe10058871e33131f8355ddfa781d63 not found: ID does not exist" Feb 16 17:35:16 crc kubenswrapper[4794]: I0216 17:35:16.065424 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-4r7xb"] Feb 16 17:35:16 crc kubenswrapper[4794]: I0216 17:35:16.089630 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-4r7xb"] Feb 16 17:35:16 crc kubenswrapper[4794]: I0216 17:35:16.822964 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2997233e-8e5e-41b2-a59d-4365abc3d109" path="/var/lib/kubelet/pods/2997233e-8e5e-41b2-a59d-4365abc3d109/volumes" Feb 16 17:35:16 crc kubenswrapper[4794]: I0216 17:35:16.824998 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80d48a50-835e-455f-81f7-9c40a212b9e6" path="/var/lib/kubelet/pods/80d48a50-835e-455f-81f7-9c40a212b9e6/volumes" Feb 16 17:35:20 crc kubenswrapper[4794]: I0216 17:35:20.141370 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:35:20 crc kubenswrapper[4794]: I0216 17:35:20.142340 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:35:24 crc kubenswrapper[4794]: E0216 17:35:24.810242 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:35:27 crc kubenswrapper[4794]: E0216 17:35:27.793230 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:35:38 crc kubenswrapper[4794]: E0216 17:35:38.793581 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:35:39 crc kubenswrapper[4794]: E0216 17:35:39.793467 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:35:41 crc kubenswrapper[4794]: I0216 17:35:41.428178 4794 scope.go:117] "RemoveContainer" containerID="f42f3f6652e80673cd93402c97cc19fc746d71d59bd381ad65fa0d9465ac6651" Feb 16 17:35:41 crc kubenswrapper[4794]: I0216 17:35:41.486707 4794 scope.go:117] "RemoveContainer" 
containerID="39148aaddc8efdd9d08367b65200436fc85a30a0cf6ccd872dd780e445c86ad9" Feb 16 17:35:47 crc kubenswrapper[4794]: I0216 17:35:47.669465 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zj6tx"] Feb 16 17:35:47 crc kubenswrapper[4794]: E0216 17:35:47.670589 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2997233e-8e5e-41b2-a59d-4365abc3d109" containerName="registry-server" Feb 16 17:35:47 crc kubenswrapper[4794]: I0216 17:35:47.670604 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="2997233e-8e5e-41b2-a59d-4365abc3d109" containerName="registry-server" Feb 16 17:35:47 crc kubenswrapper[4794]: E0216 17:35:47.670639 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2997233e-8e5e-41b2-a59d-4365abc3d109" containerName="extract-utilities" Feb 16 17:35:47 crc kubenswrapper[4794]: I0216 17:35:47.670645 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="2997233e-8e5e-41b2-a59d-4365abc3d109" containerName="extract-utilities" Feb 16 17:35:47 crc kubenswrapper[4794]: E0216 17:35:47.670668 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2997233e-8e5e-41b2-a59d-4365abc3d109" containerName="extract-content" Feb 16 17:35:47 crc kubenswrapper[4794]: I0216 17:35:47.670675 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="2997233e-8e5e-41b2-a59d-4365abc3d109" containerName="extract-content" Feb 16 17:35:47 crc kubenswrapper[4794]: I0216 17:35:47.670878 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="2997233e-8e5e-41b2-a59d-4365abc3d109" containerName="registry-server" Feb 16 17:35:47 crc kubenswrapper[4794]: I0216 17:35:47.672587 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zj6tx" Feb 16 17:35:47 crc kubenswrapper[4794]: I0216 17:35:47.683905 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zj6tx"] Feb 16 17:35:47 crc kubenswrapper[4794]: I0216 17:35:47.775635 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fzvb\" (UniqueName: \"kubernetes.io/projected/51fc989b-70c7-4537-b219-43e03fc82f17-kube-api-access-9fzvb\") pod \"community-operators-zj6tx\" (UID: \"51fc989b-70c7-4537-b219-43e03fc82f17\") " pod="openshift-marketplace/community-operators-zj6tx" Feb 16 17:35:47 crc kubenswrapper[4794]: I0216 17:35:47.775770 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51fc989b-70c7-4537-b219-43e03fc82f17-utilities\") pod \"community-operators-zj6tx\" (UID: \"51fc989b-70c7-4537-b219-43e03fc82f17\") " pod="openshift-marketplace/community-operators-zj6tx" Feb 16 17:35:47 crc kubenswrapper[4794]: I0216 17:35:47.775894 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51fc989b-70c7-4537-b219-43e03fc82f17-catalog-content\") pod \"community-operators-zj6tx\" (UID: \"51fc989b-70c7-4537-b219-43e03fc82f17\") " pod="openshift-marketplace/community-operators-zj6tx" Feb 16 17:35:47 crc kubenswrapper[4794]: I0216 17:35:47.878630 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fzvb\" (UniqueName: \"kubernetes.io/projected/51fc989b-70c7-4537-b219-43e03fc82f17-kube-api-access-9fzvb\") pod \"community-operators-zj6tx\" (UID: \"51fc989b-70c7-4537-b219-43e03fc82f17\") " pod="openshift-marketplace/community-operators-zj6tx" Feb 16 17:35:47 crc kubenswrapper[4794]: I0216 17:35:47.878758 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51fc989b-70c7-4537-b219-43e03fc82f17-utilities\") pod \"community-operators-zj6tx\" (UID: \"51fc989b-70c7-4537-b219-43e03fc82f17\") " pod="openshift-marketplace/community-operators-zj6tx" Feb 16 17:35:47 crc kubenswrapper[4794]: I0216 17:35:47.878973 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51fc989b-70c7-4537-b219-43e03fc82f17-catalog-content\") pod \"community-operators-zj6tx\" (UID: \"51fc989b-70c7-4537-b219-43e03fc82f17\") " pod="openshift-marketplace/community-operators-zj6tx" Feb 16 17:35:47 crc kubenswrapper[4794]: I0216 17:35:47.879259 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51fc989b-70c7-4537-b219-43e03fc82f17-utilities\") pod \"community-operators-zj6tx\" (UID: \"51fc989b-70c7-4537-b219-43e03fc82f17\") " pod="openshift-marketplace/community-operators-zj6tx" Feb 16 17:35:47 crc kubenswrapper[4794]: I0216 17:35:47.879409 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51fc989b-70c7-4537-b219-43e03fc82f17-catalog-content\") pod \"community-operators-zj6tx\" (UID: \"51fc989b-70c7-4537-b219-43e03fc82f17\") " pod="openshift-marketplace/community-operators-zj6tx" Feb 16 17:35:47 crc kubenswrapper[4794]: I0216 17:35:47.904490 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fzvb\" (UniqueName: \"kubernetes.io/projected/51fc989b-70c7-4537-b219-43e03fc82f17-kube-api-access-9fzvb\") pod \"community-operators-zj6tx\" (UID: \"51fc989b-70c7-4537-b219-43e03fc82f17\") " pod="openshift-marketplace/community-operators-zj6tx" Feb 16 17:35:48 crc kubenswrapper[4794]: I0216 17:35:48.010185 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zj6tx" Feb 16 17:35:48 crc kubenswrapper[4794]: I0216 17:35:48.583950 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zj6tx"] Feb 16 17:35:49 crc kubenswrapper[4794]: I0216 17:35:49.122204 4794 generic.go:334] "Generic (PLEG): container finished" podID="51fc989b-70c7-4537-b219-43e03fc82f17" containerID="1c39eac434f7b42bdbbdf1cfca92369a4967e7b2b1975145b116bd7448658a8b" exitCode=0 Feb 16 17:35:49 crc kubenswrapper[4794]: I0216 17:35:49.122565 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zj6tx" event={"ID":"51fc989b-70c7-4537-b219-43e03fc82f17","Type":"ContainerDied","Data":"1c39eac434f7b42bdbbdf1cfca92369a4967e7b2b1975145b116bd7448658a8b"} Feb 16 17:35:49 crc kubenswrapper[4794]: I0216 17:35:49.122675 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zj6tx" event={"ID":"51fc989b-70c7-4537-b219-43e03fc82f17","Type":"ContainerStarted","Data":"b9c8c5af5f67994de84ceded53a6c0673f48e8424e8a8f65d14418571485ecf4"} Feb 16 17:35:50 crc kubenswrapper[4794]: I0216 17:35:50.136320 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zj6tx" event={"ID":"51fc989b-70c7-4537-b219-43e03fc82f17","Type":"ContainerStarted","Data":"97caf0240a128d3bccff45e0f3d5113aa28dd4fb99998b27dfc978e056c85668"} Feb 16 17:35:50 crc kubenswrapper[4794]: I0216 17:35:50.140352 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:35:50 crc kubenswrapper[4794]: I0216 17:35:50.140592 4794 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:35:50 crc kubenswrapper[4794]: I0216 17:35:50.140786 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 17:35:50 crc kubenswrapper[4794]: I0216 17:35:50.141643 4794 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2"} pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:35:50 crc kubenswrapper[4794]: I0216 17:35:50.141852 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" containerID="cri-o://0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2" gracePeriod=600 Feb 16 17:35:50 crc kubenswrapper[4794]: E0216 17:35:50.274473 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:35:50 crc kubenswrapper[4794]: E0216 17:35:50.794527 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:35:51 crc kubenswrapper[4794]: I0216 17:35:51.146751 4794 generic.go:334] "Generic (PLEG): container finished" podID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2" exitCode=0 Feb 16 17:35:51 crc kubenswrapper[4794]: I0216 17:35:51.146770 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerDied","Data":"0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2"} Feb 16 17:35:51 crc kubenswrapper[4794]: I0216 17:35:51.147126 4794 scope.go:117] "RemoveContainer" containerID="43fc9dd7f2fae3a5a4c080fa56f687e2435f83f7280f2c8e8a10fb66c8654d44" Feb 16 17:35:51 crc kubenswrapper[4794]: I0216 17:35:51.149069 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2" Feb 16 17:35:51 crc kubenswrapper[4794]: E0216 17:35:51.149688 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:35:52 crc kubenswrapper[4794]: I0216 17:35:52.162016 4794 generic.go:334] "Generic (PLEG): container finished" podID="51fc989b-70c7-4537-b219-43e03fc82f17" containerID="97caf0240a128d3bccff45e0f3d5113aa28dd4fb99998b27dfc978e056c85668" exitCode=0 Feb 16 17:35:52 crc kubenswrapper[4794]: I0216 17:35:52.162098 4794 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zj6tx" event={"ID":"51fc989b-70c7-4537-b219-43e03fc82f17","Type":"ContainerDied","Data":"97caf0240a128d3bccff45e0f3d5113aa28dd4fb99998b27dfc978e056c85668"} Feb 16 17:35:52 crc kubenswrapper[4794]: E0216 17:35:52.794935 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:35:53 crc kubenswrapper[4794]: I0216 17:35:53.174537 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zj6tx" event={"ID":"51fc989b-70c7-4537-b219-43e03fc82f17","Type":"ContainerStarted","Data":"1b2c5c121fca4d08e388169378c1e1649455da92bc94c4ede3de738df98db02a"} Feb 16 17:35:53 crc kubenswrapper[4794]: I0216 17:35:53.197809 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zj6tx" podStartSLOduration=2.755758197 podStartE2EDuration="6.197758369s" podCreationTimestamp="2026-02-16 17:35:47 +0000 UTC" firstStartedPulling="2026-02-16 17:35:49.126486132 +0000 UTC m=+2175.074580779" lastFinishedPulling="2026-02-16 17:35:52.568486264 +0000 UTC m=+2178.516580951" observedRunningTime="2026-02-16 17:35:53.192210954 +0000 UTC m=+2179.140305611" watchObservedRunningTime="2026-02-16 17:35:53.197758369 +0000 UTC m=+2179.145853016" Feb 16 17:35:58 crc kubenswrapper[4794]: I0216 17:35:58.010345 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zj6tx" Feb 16 17:35:58 crc kubenswrapper[4794]: I0216 17:35:58.010914 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zj6tx" Feb 16 
17:35:58 crc kubenswrapper[4794]: I0216 17:35:58.058915 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zj6tx" Feb 16 17:35:58 crc kubenswrapper[4794]: I0216 17:35:58.296632 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zj6tx" Feb 16 17:35:58 crc kubenswrapper[4794]: I0216 17:35:58.366558 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zj6tx"] Feb 16 17:36:00 crc kubenswrapper[4794]: I0216 17:36:00.243936 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zj6tx" podUID="51fc989b-70c7-4537-b219-43e03fc82f17" containerName="registry-server" containerID="cri-o://1b2c5c121fca4d08e388169378c1e1649455da92bc94c4ede3de738df98db02a" gracePeriod=2 Feb 16 17:36:00 crc kubenswrapper[4794]: I0216 17:36:00.752621 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zj6tx" Feb 16 17:36:00 crc kubenswrapper[4794]: I0216 17:36:00.816763 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51fc989b-70c7-4537-b219-43e03fc82f17-catalog-content\") pod \"51fc989b-70c7-4537-b219-43e03fc82f17\" (UID: \"51fc989b-70c7-4537-b219-43e03fc82f17\") " Feb 16 17:36:00 crc kubenswrapper[4794]: I0216 17:36:00.816858 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fzvb\" (UniqueName: \"kubernetes.io/projected/51fc989b-70c7-4537-b219-43e03fc82f17-kube-api-access-9fzvb\") pod \"51fc989b-70c7-4537-b219-43e03fc82f17\" (UID: \"51fc989b-70c7-4537-b219-43e03fc82f17\") " Feb 16 17:36:00 crc kubenswrapper[4794]: I0216 17:36:00.816986 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51fc989b-70c7-4537-b219-43e03fc82f17-utilities\") pod \"51fc989b-70c7-4537-b219-43e03fc82f17\" (UID: \"51fc989b-70c7-4537-b219-43e03fc82f17\") " Feb 16 17:36:00 crc kubenswrapper[4794]: I0216 17:36:00.818413 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51fc989b-70c7-4537-b219-43e03fc82f17-utilities" (OuterVolumeSpecName: "utilities") pod "51fc989b-70c7-4537-b219-43e03fc82f17" (UID: "51fc989b-70c7-4537-b219-43e03fc82f17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:36:00 crc kubenswrapper[4794]: I0216 17:36:00.825736 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51fc989b-70c7-4537-b219-43e03fc82f17-kube-api-access-9fzvb" (OuterVolumeSpecName: "kube-api-access-9fzvb") pod "51fc989b-70c7-4537-b219-43e03fc82f17" (UID: "51fc989b-70c7-4537-b219-43e03fc82f17"). InnerVolumeSpecName "kube-api-access-9fzvb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:36:00 crc kubenswrapper[4794]: I0216 17:36:00.879439 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51fc989b-70c7-4537-b219-43e03fc82f17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51fc989b-70c7-4537-b219-43e03fc82f17" (UID: "51fc989b-70c7-4537-b219-43e03fc82f17"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:36:00 crc kubenswrapper[4794]: I0216 17:36:00.920888 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51fc989b-70c7-4537-b219-43e03fc82f17-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:36:00 crc kubenswrapper[4794]: I0216 17:36:00.920936 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fzvb\" (UniqueName: \"kubernetes.io/projected/51fc989b-70c7-4537-b219-43e03fc82f17-kube-api-access-9fzvb\") on node \"crc\" DevicePath \"\"" Feb 16 17:36:00 crc kubenswrapper[4794]: I0216 17:36:00.920960 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51fc989b-70c7-4537-b219-43e03fc82f17-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:36:01 crc kubenswrapper[4794]: I0216 17:36:01.255262 4794 generic.go:334] "Generic (PLEG): container finished" podID="51fc989b-70c7-4537-b219-43e03fc82f17" containerID="1b2c5c121fca4d08e388169378c1e1649455da92bc94c4ede3de738df98db02a" exitCode=0 Feb 16 17:36:01 crc kubenswrapper[4794]: I0216 17:36:01.255357 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zj6tx" event={"ID":"51fc989b-70c7-4537-b219-43e03fc82f17","Type":"ContainerDied","Data":"1b2c5c121fca4d08e388169378c1e1649455da92bc94c4ede3de738df98db02a"} Feb 16 17:36:01 crc kubenswrapper[4794]: I0216 17:36:01.255386 4794 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-zj6tx" event={"ID":"51fc989b-70c7-4537-b219-43e03fc82f17","Type":"ContainerDied","Data":"b9c8c5af5f67994de84ceded53a6c0673f48e8424e8a8f65d14418571485ecf4"} Feb 16 17:36:01 crc kubenswrapper[4794]: I0216 17:36:01.255421 4794 scope.go:117] "RemoveContainer" containerID="1b2c5c121fca4d08e388169378c1e1649455da92bc94c4ede3de738df98db02a" Feb 16 17:36:01 crc kubenswrapper[4794]: I0216 17:36:01.255600 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zj6tx" Feb 16 17:36:01 crc kubenswrapper[4794]: I0216 17:36:01.291243 4794 scope.go:117] "RemoveContainer" containerID="97caf0240a128d3bccff45e0f3d5113aa28dd4fb99998b27dfc978e056c85668" Feb 16 17:36:01 crc kubenswrapper[4794]: I0216 17:36:01.301709 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zj6tx"] Feb 16 17:36:01 crc kubenswrapper[4794]: I0216 17:36:01.312075 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zj6tx"] Feb 16 17:36:01 crc kubenswrapper[4794]: I0216 17:36:01.330251 4794 scope.go:117] "RemoveContainer" containerID="1c39eac434f7b42bdbbdf1cfca92369a4967e7b2b1975145b116bd7448658a8b" Feb 16 17:36:01 crc kubenswrapper[4794]: I0216 17:36:01.364339 4794 scope.go:117] "RemoveContainer" containerID="1b2c5c121fca4d08e388169378c1e1649455da92bc94c4ede3de738df98db02a" Feb 16 17:36:01 crc kubenswrapper[4794]: E0216 17:36:01.364767 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b2c5c121fca4d08e388169378c1e1649455da92bc94c4ede3de738df98db02a\": container with ID starting with 1b2c5c121fca4d08e388169378c1e1649455da92bc94c4ede3de738df98db02a not found: ID does not exist" containerID="1b2c5c121fca4d08e388169378c1e1649455da92bc94c4ede3de738df98db02a" Feb 16 17:36:01 crc kubenswrapper[4794]: I0216 
17:36:01.364797 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b2c5c121fca4d08e388169378c1e1649455da92bc94c4ede3de738df98db02a"} err="failed to get container status \"1b2c5c121fca4d08e388169378c1e1649455da92bc94c4ede3de738df98db02a\": rpc error: code = NotFound desc = could not find container \"1b2c5c121fca4d08e388169378c1e1649455da92bc94c4ede3de738df98db02a\": container with ID starting with 1b2c5c121fca4d08e388169378c1e1649455da92bc94c4ede3de738df98db02a not found: ID does not exist" Feb 16 17:36:01 crc kubenswrapper[4794]: I0216 17:36:01.364820 4794 scope.go:117] "RemoveContainer" containerID="97caf0240a128d3bccff45e0f3d5113aa28dd4fb99998b27dfc978e056c85668" Feb 16 17:36:01 crc kubenswrapper[4794]: E0216 17:36:01.365171 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97caf0240a128d3bccff45e0f3d5113aa28dd4fb99998b27dfc978e056c85668\": container with ID starting with 97caf0240a128d3bccff45e0f3d5113aa28dd4fb99998b27dfc978e056c85668 not found: ID does not exist" containerID="97caf0240a128d3bccff45e0f3d5113aa28dd4fb99998b27dfc978e056c85668" Feb 16 17:36:01 crc kubenswrapper[4794]: I0216 17:36:01.365211 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97caf0240a128d3bccff45e0f3d5113aa28dd4fb99998b27dfc978e056c85668"} err="failed to get container status \"97caf0240a128d3bccff45e0f3d5113aa28dd4fb99998b27dfc978e056c85668\": rpc error: code = NotFound desc = could not find container \"97caf0240a128d3bccff45e0f3d5113aa28dd4fb99998b27dfc978e056c85668\": container with ID starting with 97caf0240a128d3bccff45e0f3d5113aa28dd4fb99998b27dfc978e056c85668 not found: ID does not exist" Feb 16 17:36:01 crc kubenswrapper[4794]: I0216 17:36:01.365244 4794 scope.go:117] "RemoveContainer" containerID="1c39eac434f7b42bdbbdf1cfca92369a4967e7b2b1975145b116bd7448658a8b" Feb 16 17:36:01 crc 
kubenswrapper[4794]: E0216 17:36:01.365543 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c39eac434f7b42bdbbdf1cfca92369a4967e7b2b1975145b116bd7448658a8b\": container with ID starting with 1c39eac434f7b42bdbbdf1cfca92369a4967e7b2b1975145b116bd7448658a8b not found: ID does not exist" containerID="1c39eac434f7b42bdbbdf1cfca92369a4967e7b2b1975145b116bd7448658a8b" Feb 16 17:36:01 crc kubenswrapper[4794]: I0216 17:36:01.365566 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c39eac434f7b42bdbbdf1cfca92369a4967e7b2b1975145b116bd7448658a8b"} err="failed to get container status \"1c39eac434f7b42bdbbdf1cfca92369a4967e7b2b1975145b116bd7448658a8b\": rpc error: code = NotFound desc = could not find container \"1c39eac434f7b42bdbbdf1cfca92369a4967e7b2b1975145b116bd7448658a8b\": container with ID starting with 1c39eac434f7b42bdbbdf1cfca92369a4967e7b2b1975145b116bd7448658a8b not found: ID does not exist" Feb 16 17:36:01 crc kubenswrapper[4794]: E0216 17:36:01.794664 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:36:02 crc kubenswrapper[4794]: I0216 17:36:02.833769 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51fc989b-70c7-4537-b219-43e03fc82f17" path="/var/lib/kubelet/pods/51fc989b-70c7-4537-b219-43e03fc82f17/volumes" Feb 16 17:36:03 crc kubenswrapper[4794]: I0216 17:36:03.791288 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2" Feb 16 17:36:03 crc kubenswrapper[4794]: E0216 17:36:03.791771 4794 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:36:07 crc kubenswrapper[4794]: E0216 17:36:07.795262 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:36:12 crc kubenswrapper[4794]: E0216 17:36:12.794135 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:36:17 crc kubenswrapper[4794]: I0216 17:36:17.792920 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2" Feb 16 17:36:17 crc kubenswrapper[4794]: E0216 17:36:17.793759 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:36:20 crc kubenswrapper[4794]: E0216 17:36:20.794818 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:36:26 crc kubenswrapper[4794]: E0216 17:36:26.794655 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:36:29 crc kubenswrapper[4794]: I0216 17:36:29.793580 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2" Feb 16 17:36:29 crc kubenswrapper[4794]: E0216 17:36:29.794676 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:36:35 crc kubenswrapper[4794]: E0216 17:36:35.793667 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:36:40 crc kubenswrapper[4794]: I0216 17:36:40.792007 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2" Feb 16 17:36:40 crc kubenswrapper[4794]: E0216 17:36:40.792892 4794 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:36:41 crc kubenswrapper[4794]: E0216 17:36:41.794294 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:36:47 crc kubenswrapper[4794]: I0216 17:36:47.797478 4794 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:36:47 crc kubenswrapper[4794]: E0216 17:36:47.912431 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 17:36:47 crc kubenswrapper[4794]: E0216 17:36:47.912521 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 17:36:47 crc kubenswrapper[4794]: E0216 17:36:47.912697 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2h5l2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7gcsf_openstack(c695f880-15cb-45b1-8545-60d8437ec631): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:36:47 crc kubenswrapper[4794]: E0216 17:36:47.914184 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:36:53 crc kubenswrapper[4794]: E0216 17:36:53.878101 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 17:36:53 crc kubenswrapper[4794]: E0216 17:36:53.878780 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 17:36:53 crc kubenswrapper[4794]: E0216 17:36:53.878946 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh58dh6ch557h84h55ch564h5bh58fh5c8h5d4h584h669h667h569h59hd5hdbh9dh67ch5f9h59fh597h96h664h687h66dhfch5ddh5b7h88h59cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:
tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9v9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8981f528-1f74-4d56-a93c-22860725b490): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 17:36:53 crc kubenswrapper[4794]: E0216 17:36:53.880191 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:36:54 crc kubenswrapper[4794]: I0216 17:36:54.800100 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2" Feb 16 17:36:54 crc kubenswrapper[4794]: E0216 17:36:54.800743 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:37:01 crc kubenswrapper[4794]: E0216 17:37:01.794544 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:37:07 crc kubenswrapper[4794]: E0216 17:37:07.795318 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:37:07 crc kubenswrapper[4794]: I0216 17:37:07.820628 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2" Feb 16 17:37:07 crc kubenswrapper[4794]: E0216 17:37:07.821099 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:37:15 crc kubenswrapper[4794]: E0216 17:37:15.794371 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:37:20 crc kubenswrapper[4794]: I0216 17:37:20.792762 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2" Feb 16 17:37:20 crc kubenswrapper[4794]: E0216 17:37:20.794015 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:37:22 crc kubenswrapper[4794]: E0216 17:37:22.794243 4794 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:37:29 crc kubenswrapper[4794]: E0216 17:37:29.794223 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:37:31 crc kubenswrapper[4794]: I0216 17:37:31.792524 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2" Feb 16 17:37:31 crc kubenswrapper[4794]: E0216 17:37:31.793106 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:37:36 crc kubenswrapper[4794]: E0216 17:37:36.799163 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:37:42 crc kubenswrapper[4794]: E0216 17:37:42.795180 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:37:45 crc kubenswrapper[4794]: I0216 17:37:45.791820 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2" Feb 16 17:37:45 crc kubenswrapper[4794]: E0216 17:37:45.792602 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:37:47 crc kubenswrapper[4794]: E0216 17:37:47.794997 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:37:54 crc kubenswrapper[4794]: I0216 17:37:54.083204 4794 generic.go:334] "Generic (PLEG): container finished" podID="25576ab9-760b-40e6-b7c7-866fbb7ed70c" containerID="a9a606122964105341db1c1f3bb249c2ff16792bb31ef34e83d994ad483b3f2e" exitCode=2 Feb 16 17:37:54 crc kubenswrapper[4794]: I0216 17:37:54.083286 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z" event={"ID":"25576ab9-760b-40e6-b7c7-866fbb7ed70c","Type":"ContainerDied","Data":"a9a606122964105341db1c1f3bb249c2ff16792bb31ef34e83d994ad483b3f2e"} Feb 16 17:37:55 crc kubenswrapper[4794]: I0216 17:37:55.588217 4794 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z" Feb 16 17:37:55 crc kubenswrapper[4794]: I0216 17:37:55.717387 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzqf7\" (UniqueName: \"kubernetes.io/projected/25576ab9-760b-40e6-b7c7-866fbb7ed70c-kube-api-access-pzqf7\") pod \"25576ab9-760b-40e6-b7c7-866fbb7ed70c\" (UID: \"25576ab9-760b-40e6-b7c7-866fbb7ed70c\") " Feb 16 17:37:55 crc kubenswrapper[4794]: I0216 17:37:55.717638 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25576ab9-760b-40e6-b7c7-866fbb7ed70c-inventory\") pod \"25576ab9-760b-40e6-b7c7-866fbb7ed70c\" (UID: \"25576ab9-760b-40e6-b7c7-866fbb7ed70c\") " Feb 16 17:37:55 crc kubenswrapper[4794]: I0216 17:37:55.717676 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25576ab9-760b-40e6-b7c7-866fbb7ed70c-ssh-key-openstack-edpm-ipam\") pod \"25576ab9-760b-40e6-b7c7-866fbb7ed70c\" (UID: \"25576ab9-760b-40e6-b7c7-866fbb7ed70c\") " Feb 16 17:37:55 crc kubenswrapper[4794]: I0216 17:37:55.722792 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25576ab9-760b-40e6-b7c7-866fbb7ed70c-kube-api-access-pzqf7" (OuterVolumeSpecName: "kube-api-access-pzqf7") pod "25576ab9-760b-40e6-b7c7-866fbb7ed70c" (UID: "25576ab9-760b-40e6-b7c7-866fbb7ed70c"). InnerVolumeSpecName "kube-api-access-pzqf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:37:55 crc kubenswrapper[4794]: I0216 17:37:55.749161 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25576ab9-760b-40e6-b7c7-866fbb7ed70c-inventory" (OuterVolumeSpecName: "inventory") pod "25576ab9-760b-40e6-b7c7-866fbb7ed70c" (UID: "25576ab9-760b-40e6-b7c7-866fbb7ed70c"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:37:55 crc kubenswrapper[4794]: I0216 17:37:55.751242 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25576ab9-760b-40e6-b7c7-866fbb7ed70c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "25576ab9-760b-40e6-b7c7-866fbb7ed70c" (UID: "25576ab9-760b-40e6-b7c7-866fbb7ed70c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:37:55 crc kubenswrapper[4794]: I0216 17:37:55.820939 4794 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25576ab9-760b-40e6-b7c7-866fbb7ed70c-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 17:37:55 crc kubenswrapper[4794]: I0216 17:37:55.820977 4794 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25576ab9-760b-40e6-b7c7-866fbb7ed70c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 17:37:55 crc kubenswrapper[4794]: I0216 17:37:55.820994 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzqf7\" (UniqueName: \"kubernetes.io/projected/25576ab9-760b-40e6-b7c7-866fbb7ed70c-kube-api-access-pzqf7\") on node \"crc\" DevicePath \"\"" Feb 16 17:37:56 crc kubenswrapper[4794]: I0216 17:37:56.102893 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z" event={"ID":"25576ab9-760b-40e6-b7c7-866fbb7ed70c","Type":"ContainerDied","Data":"7818fb02d7da7abd5ff0e1a0b33615c9373f553109d2241d98c94f1b22bf5cce"} Feb 16 17:37:56 crc kubenswrapper[4794]: I0216 17:37:56.102921 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z" Feb 16 17:37:56 crc kubenswrapper[4794]: I0216 17:37:56.102925 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7818fb02d7da7abd5ff0e1a0b33615c9373f553109d2241d98c94f1b22bf5cce" Feb 16 17:37:57 crc kubenswrapper[4794]: E0216 17:37:57.793918 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:37:59 crc kubenswrapper[4794]: I0216 17:37:59.791237 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2" Feb 16 17:37:59 crc kubenswrapper[4794]: E0216 17:37:59.791922 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:38:02 crc kubenswrapper[4794]: E0216 17:38:02.793827 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.034487 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7"] Feb 16 17:38:03 
crc kubenswrapper[4794]: E0216 17:38:03.035077 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25576ab9-760b-40e6-b7c7-866fbb7ed70c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.035104 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="25576ab9-760b-40e6-b7c7-866fbb7ed70c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 17:38:03 crc kubenswrapper[4794]: E0216 17:38:03.035145 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51fc989b-70c7-4537-b219-43e03fc82f17" containerName="extract-content" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.035157 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="51fc989b-70c7-4537-b219-43e03fc82f17" containerName="extract-content" Feb 16 17:38:03 crc kubenswrapper[4794]: E0216 17:38:03.035195 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51fc989b-70c7-4537-b219-43e03fc82f17" containerName="registry-server" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.035204 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="51fc989b-70c7-4537-b219-43e03fc82f17" containerName="registry-server" Feb 16 17:38:03 crc kubenswrapper[4794]: E0216 17:38:03.035237 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51fc989b-70c7-4537-b219-43e03fc82f17" containerName="extract-utilities" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.035248 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="51fc989b-70c7-4537-b219-43e03fc82f17" containerName="extract-utilities" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.035487 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="51fc989b-70c7-4537-b219-43e03fc82f17" containerName="registry-server" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.035511 4794 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="25576ab9-760b-40e6-b7c7-866fbb7ed70c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.036509 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.039378 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.039539 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.039671 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.039693 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kshzw" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.062113 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7"] Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.197496 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-752kb\" (UniqueName: \"kubernetes.io/projected/7694359c-dd70-4640-bcc6-2ed4377e5cbb-kube-api-access-752kb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7\" (UID: \"7694359c-dd70-4640-bcc6-2ed4377e5cbb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.197641 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7694359c-dd70-4640-bcc6-2ed4377e5cbb-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7\" (UID: \"7694359c-dd70-4640-bcc6-2ed4377e5cbb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.197780 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7694359c-dd70-4640-bcc6-2ed4377e5cbb-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7\" (UID: \"7694359c-dd70-4640-bcc6-2ed4377e5cbb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.299679 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-752kb\" (UniqueName: \"kubernetes.io/projected/7694359c-dd70-4640-bcc6-2ed4377e5cbb-kube-api-access-752kb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7\" (UID: \"7694359c-dd70-4640-bcc6-2ed4377e5cbb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.299803 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7694359c-dd70-4640-bcc6-2ed4377e5cbb-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7\" (UID: \"7694359c-dd70-4640-bcc6-2ed4377e5cbb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.299976 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7694359c-dd70-4640-bcc6-2ed4377e5cbb-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7\" (UID: \"7694359c-dd70-4640-bcc6-2ed4377e5cbb\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.305673 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7694359c-dd70-4640-bcc6-2ed4377e5cbb-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7\" (UID: \"7694359c-dd70-4640-bcc6-2ed4377e5cbb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.306603 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7694359c-dd70-4640-bcc6-2ed4377e5cbb-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7\" (UID: \"7694359c-dd70-4640-bcc6-2ed4377e5cbb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.320017 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-752kb\" (UniqueName: \"kubernetes.io/projected/7694359c-dd70-4640-bcc6-2ed4377e5cbb-kube-api-access-752kb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7\" (UID: \"7694359c-dd70-4640-bcc6-2ed4377e5cbb\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7" Feb 16 17:38:03 crc kubenswrapper[4794]: I0216 17:38:03.365507 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7" Feb 16 17:38:04 crc kubenswrapper[4794]: I0216 17:38:04.034867 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7"] Feb 16 17:38:04 crc kubenswrapper[4794]: I0216 17:38:04.200552 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7" event={"ID":"7694359c-dd70-4640-bcc6-2ed4377e5cbb","Type":"ContainerStarted","Data":"c4ee309ab11ba7267b917e7df71370e6932206cebd75133de437d7bac0c6f90f"} Feb 16 17:38:05 crc kubenswrapper[4794]: I0216 17:38:05.214613 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7" event={"ID":"7694359c-dd70-4640-bcc6-2ed4377e5cbb","Type":"ContainerStarted","Data":"1d5fceca7c06530be6b37837b02e55bdfb64c818a6d20b346c0ea7433d064ae3"} Feb 16 17:38:05 crc kubenswrapper[4794]: I0216 17:38:05.243268 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7" podStartSLOduration=1.7750953699999998 podStartE2EDuration="2.243251368s" podCreationTimestamp="2026-02-16 17:38:03 +0000 UTC" firstStartedPulling="2026-02-16 17:38:04.0392036 +0000 UTC m=+2309.987298247" lastFinishedPulling="2026-02-16 17:38:04.507359598 +0000 UTC m=+2310.455454245" observedRunningTime="2026-02-16 17:38:05.237784063 +0000 UTC m=+2311.185878720" watchObservedRunningTime="2026-02-16 17:38:05.243251368 +0000 UTC m=+2311.191346015" Feb 16 17:38:09 crc kubenswrapper[4794]: E0216 17:38:09.794954 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" 
podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:38:14 crc kubenswrapper[4794]: I0216 17:38:14.804368 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2"
Feb 16 17:38:14 crc kubenswrapper[4794]: E0216 17:38:14.805446 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:38:15 crc kubenswrapper[4794]: E0216 17:38:15.793751 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:38:24 crc kubenswrapper[4794]: E0216 17:38:24.803692 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:38:27 crc kubenswrapper[4794]: E0216 17:38:27.151212 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:38:27 crc kubenswrapper[4794]: I0216 17:38:27.791575 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2"
Feb 16 17:38:27 crc kubenswrapper[4794]: E0216 17:38:27.792173 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:38:35 crc kubenswrapper[4794]: E0216 17:38:35.793137 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:38:38 crc kubenswrapper[4794]: E0216 17:38:38.806064 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:38:39 crc kubenswrapper[4794]: I0216 17:38:39.792813 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2"
Feb 16 17:38:39 crc kubenswrapper[4794]: E0216 17:38:39.793759 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:38:48 crc kubenswrapper[4794]: E0216 17:38:48.793777 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:38:51 crc kubenswrapper[4794]: E0216 17:38:51.794472 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:38:52 crc kubenswrapper[4794]: I0216 17:38:52.792213 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2"
Feb 16 17:38:52 crc kubenswrapper[4794]: E0216 17:38:52.793283 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:39:01 crc kubenswrapper[4794]: E0216 17:39:01.794051 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:39:02 crc kubenswrapper[4794]: E0216 17:39:02.792925 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:39:04 crc kubenswrapper[4794]: I0216 17:39:04.806065 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2"
Feb 16 17:39:04 crc kubenswrapper[4794]: E0216 17:39:04.806745 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:39:14 crc kubenswrapper[4794]: E0216 17:39:14.802291 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:39:15 crc kubenswrapper[4794]: E0216 17:39:15.811663 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:39:18 crc kubenswrapper[4794]: I0216 17:39:18.792453 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2"
Feb 16 17:39:18 crc kubenswrapper[4794]: E0216 17:39:18.793265 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:39:25 crc kubenswrapper[4794]: E0216 17:39:25.802268 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:39:28 crc kubenswrapper[4794]: E0216 17:39:28.794875 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:39:32 crc kubenswrapper[4794]: I0216 17:39:32.791358 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2"
Feb 16 17:39:32 crc kubenswrapper[4794]: E0216 17:39:32.792554 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:39:40 crc kubenswrapper[4794]: E0216 17:39:40.793462 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:39:40 crc kubenswrapper[4794]: E0216 17:39:40.793503 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:39:44 crc kubenswrapper[4794]: I0216 17:39:44.151180 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2"
Feb 16 17:39:44 crc kubenswrapper[4794]: E0216 17:39:44.155328 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:39:52 crc kubenswrapper[4794]: E0216 17:39:52.793960 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:39:53 crc kubenswrapper[4794]: E0216 17:39:53.792904 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:39:58 crc kubenswrapper[4794]: I0216 17:39:58.791875 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2"
Feb 16 17:39:58 crc kubenswrapper[4794]: E0216 17:39:58.792760 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:40:05 crc kubenswrapper[4794]: E0216 17:40:05.795117 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:40:06 crc kubenswrapper[4794]: E0216 17:40:06.796639 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:40:11 crc kubenswrapper[4794]: I0216 17:40:11.301106 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m7fmf"]
Feb 16 17:40:11 crc kubenswrapper[4794]: I0216 17:40:11.304552 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m7fmf"
Feb 16 17:40:11 crc kubenswrapper[4794]: I0216 17:40:11.343843 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m7fmf"]
Feb 16 17:40:11 crc kubenswrapper[4794]: I0216 17:40:11.481090 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb35358-0846-42c4-9492-94555bdc6b67-catalog-content\") pod \"redhat-operators-m7fmf\" (UID: \"efb35358-0846-42c4-9492-94555bdc6b67\") " pod="openshift-marketplace/redhat-operators-m7fmf"
Feb 16 17:40:11 crc kubenswrapper[4794]: I0216 17:40:11.481221 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb35358-0846-42c4-9492-94555bdc6b67-utilities\") pod \"redhat-operators-m7fmf\" (UID: \"efb35358-0846-42c4-9492-94555bdc6b67\") " pod="openshift-marketplace/redhat-operators-m7fmf"
Feb 16 17:40:11 crc kubenswrapper[4794]: I0216 17:40:11.481265 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlv6f\" (UniqueName: \"kubernetes.io/projected/efb35358-0846-42c4-9492-94555bdc6b67-kube-api-access-dlv6f\") pod \"redhat-operators-m7fmf\" (UID: \"efb35358-0846-42c4-9492-94555bdc6b67\") " pod="openshift-marketplace/redhat-operators-m7fmf"
Feb 16 17:40:11 crc kubenswrapper[4794]: I0216 17:40:11.583752 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb35358-0846-42c4-9492-94555bdc6b67-catalog-content\") pod \"redhat-operators-m7fmf\" (UID: \"efb35358-0846-42c4-9492-94555bdc6b67\") " pod="openshift-marketplace/redhat-operators-m7fmf"
Feb 16 17:40:11 crc kubenswrapper[4794]: I0216 17:40:11.583886 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb35358-0846-42c4-9492-94555bdc6b67-utilities\") pod \"redhat-operators-m7fmf\" (UID: \"efb35358-0846-42c4-9492-94555bdc6b67\") " pod="openshift-marketplace/redhat-operators-m7fmf"
Feb 16 17:40:11 crc kubenswrapper[4794]: I0216 17:40:11.583947 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlv6f\" (UniqueName: \"kubernetes.io/projected/efb35358-0846-42c4-9492-94555bdc6b67-kube-api-access-dlv6f\") pod \"redhat-operators-m7fmf\" (UID: \"efb35358-0846-42c4-9492-94555bdc6b67\") " pod="openshift-marketplace/redhat-operators-m7fmf"
Feb 16 17:40:11 crc kubenswrapper[4794]: I0216 17:40:11.584455 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb35358-0846-42c4-9492-94555bdc6b67-utilities\") pod \"redhat-operators-m7fmf\" (UID: \"efb35358-0846-42c4-9492-94555bdc6b67\") " pod="openshift-marketplace/redhat-operators-m7fmf"
Feb 16 17:40:11 crc kubenswrapper[4794]: I0216 17:40:11.584457 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb35358-0846-42c4-9492-94555bdc6b67-catalog-content\") pod \"redhat-operators-m7fmf\" (UID: \"efb35358-0846-42c4-9492-94555bdc6b67\") " pod="openshift-marketplace/redhat-operators-m7fmf"
Feb 16 17:40:11 crc kubenswrapper[4794]: I0216 17:40:11.604841 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlv6f\" (UniqueName: \"kubernetes.io/projected/efb35358-0846-42c4-9492-94555bdc6b67-kube-api-access-dlv6f\") pod \"redhat-operators-m7fmf\" (UID: \"efb35358-0846-42c4-9492-94555bdc6b67\") " pod="openshift-marketplace/redhat-operators-m7fmf"
Feb 16 17:40:11 crc kubenswrapper[4794]: I0216 17:40:11.632238 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m7fmf"
Feb 16 17:40:12 crc kubenswrapper[4794]: I0216 17:40:12.095528 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m7fmf"]
Feb 16 17:40:12 crc kubenswrapper[4794]: I0216 17:40:12.472279 4794 generic.go:334] "Generic (PLEG): container finished" podID="efb35358-0846-42c4-9492-94555bdc6b67" containerID="bcb89b900ceaa4f9c52746da677a9340d2ef58124ede70ceca8d9be2f8ca23cd" exitCode=0
Feb 16 17:40:12 crc kubenswrapper[4794]: I0216 17:40:12.473399 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m7fmf" event={"ID":"efb35358-0846-42c4-9492-94555bdc6b67","Type":"ContainerDied","Data":"bcb89b900ceaa4f9c52746da677a9340d2ef58124ede70ceca8d9be2f8ca23cd"}
Feb 16 17:40:12 crc kubenswrapper[4794]: I0216 17:40:12.473517 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m7fmf" event={"ID":"efb35358-0846-42c4-9492-94555bdc6b67","Type":"ContainerStarted","Data":"d15f82000746f6a387ccb043ee51d15c71a2f3428f3e9160e5700cf8c442902d"}
Feb 16 17:40:13 crc kubenswrapper[4794]: I0216 17:40:13.491137 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m7fmf" event={"ID":"efb35358-0846-42c4-9492-94555bdc6b67","Type":"ContainerStarted","Data":"a5f42a8f9a3d41a34b2308b82bcff002dc8264e3f8c6386c360028a8074fed7d"}
Feb 16 17:40:13 crc kubenswrapper[4794]: I0216 17:40:13.792433 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2"
Feb 16 17:40:13 crc kubenswrapper[4794]: E0216 17:40:13.792817 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:40:18 crc kubenswrapper[4794]: E0216 17:40:18.103119 4794 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefb35358_0846_42c4_9492_94555bdc6b67.slice/crio-conmon-a5f42a8f9a3d41a34b2308b82bcff002dc8264e3f8c6386c360028a8074fed7d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefb35358_0846_42c4_9492_94555bdc6b67.slice/crio-a5f42a8f9a3d41a34b2308b82bcff002dc8264e3f8c6386c360028a8074fed7d.scope\": RecentStats: unable to find data in memory cache]"
Feb 16 17:40:18 crc kubenswrapper[4794]: I0216 17:40:18.549197 4794 generic.go:334] "Generic (PLEG): container finished" podID="efb35358-0846-42c4-9492-94555bdc6b67" containerID="a5f42a8f9a3d41a34b2308b82bcff002dc8264e3f8c6386c360028a8074fed7d" exitCode=0
Feb 16 17:40:18 crc kubenswrapper[4794]: I0216 17:40:18.549246 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m7fmf" event={"ID":"efb35358-0846-42c4-9492-94555bdc6b67","Type":"ContainerDied","Data":"a5f42a8f9a3d41a34b2308b82bcff002dc8264e3f8c6386c360028a8074fed7d"}
Feb 16 17:40:18 crc kubenswrapper[4794]: E0216 17:40:18.793865 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:40:19 crc kubenswrapper[4794]: I0216 17:40:19.570190 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m7fmf" event={"ID":"efb35358-0846-42c4-9492-94555bdc6b67","Type":"ContainerStarted","Data":"143722baa8ce9164e4eda143ac183164ea3731681dadcd056ff0b0aac4a7a848"}
Feb 16 17:40:19 crc kubenswrapper[4794]: I0216 17:40:19.596238 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m7fmf" podStartSLOduration=2.084371617 podStartE2EDuration="8.596221765s" podCreationTimestamp="2026-02-16 17:40:11 +0000 UTC" firstStartedPulling="2026-02-16 17:40:12.474861161 +0000 UTC m=+2438.422955808" lastFinishedPulling="2026-02-16 17:40:18.986711289 +0000 UTC m=+2444.934805956" observedRunningTime="2026-02-16 17:40:19.593154268 +0000 UTC m=+2445.541248925" watchObservedRunningTime="2026-02-16 17:40:19.596221765 +0000 UTC m=+2445.544316412"
Feb 16 17:40:19 crc kubenswrapper[4794]: E0216 17:40:19.793875 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:40:20 crc kubenswrapper[4794]: I0216 17:40:20.530210 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v6fpv"]
Feb 16 17:40:20 crc kubenswrapper[4794]: I0216 17:40:20.534733 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v6fpv"
Feb 16 17:40:20 crc kubenswrapper[4794]: I0216 17:40:20.561866 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v6fpv"]
Feb 16 17:40:20 crc kubenswrapper[4794]: I0216 17:40:20.733402 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e261c1f-73e1-4df0-8b70-82134d90a4a5-utilities\") pod \"certified-operators-v6fpv\" (UID: \"5e261c1f-73e1-4df0-8b70-82134d90a4a5\") " pod="openshift-marketplace/certified-operators-v6fpv"
Feb 16 17:40:20 crc kubenswrapper[4794]: I0216 17:40:20.733871 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e261c1f-73e1-4df0-8b70-82134d90a4a5-catalog-content\") pod \"certified-operators-v6fpv\" (UID: \"5e261c1f-73e1-4df0-8b70-82134d90a4a5\") " pod="openshift-marketplace/certified-operators-v6fpv"
Feb 16 17:40:20 crc kubenswrapper[4794]: I0216 17:40:20.734029 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhwwn\" (UniqueName: \"kubernetes.io/projected/5e261c1f-73e1-4df0-8b70-82134d90a4a5-kube-api-access-dhwwn\") pod \"certified-operators-v6fpv\" (UID: \"5e261c1f-73e1-4df0-8b70-82134d90a4a5\") " pod="openshift-marketplace/certified-operators-v6fpv"
Feb 16 17:40:20 crc kubenswrapper[4794]: I0216 17:40:20.836710 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e261c1f-73e1-4df0-8b70-82134d90a4a5-catalog-content\") pod \"certified-operators-v6fpv\" (UID: \"5e261c1f-73e1-4df0-8b70-82134d90a4a5\") " pod="openshift-marketplace/certified-operators-v6fpv"
Feb 16 17:40:20 crc kubenswrapper[4794]: I0216 17:40:20.837008 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhwwn\" (UniqueName: \"kubernetes.io/projected/5e261c1f-73e1-4df0-8b70-82134d90a4a5-kube-api-access-dhwwn\") pod \"certified-operators-v6fpv\" (UID: \"5e261c1f-73e1-4df0-8b70-82134d90a4a5\") " pod="openshift-marketplace/certified-operators-v6fpv"
Feb 16 17:40:20 crc kubenswrapper[4794]: I0216 17:40:20.837122 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e261c1f-73e1-4df0-8b70-82134d90a4a5-utilities\") pod \"certified-operators-v6fpv\" (UID: \"5e261c1f-73e1-4df0-8b70-82134d90a4a5\") " pod="openshift-marketplace/certified-operators-v6fpv"
Feb 16 17:40:20 crc kubenswrapper[4794]: I0216 17:40:20.838281 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e261c1f-73e1-4df0-8b70-82134d90a4a5-catalog-content\") pod \"certified-operators-v6fpv\" (UID: \"5e261c1f-73e1-4df0-8b70-82134d90a4a5\") " pod="openshift-marketplace/certified-operators-v6fpv"
Feb 16 17:40:20 crc kubenswrapper[4794]: I0216 17:40:20.838342 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e261c1f-73e1-4df0-8b70-82134d90a4a5-utilities\") pod \"certified-operators-v6fpv\" (UID: \"5e261c1f-73e1-4df0-8b70-82134d90a4a5\") " pod="openshift-marketplace/certified-operators-v6fpv"
Feb 16 17:40:20 crc kubenswrapper[4794]: I0216 17:40:20.858267 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhwwn\" (UniqueName: \"kubernetes.io/projected/5e261c1f-73e1-4df0-8b70-82134d90a4a5-kube-api-access-dhwwn\") pod \"certified-operators-v6fpv\" (UID: \"5e261c1f-73e1-4df0-8b70-82134d90a4a5\") " pod="openshift-marketplace/certified-operators-v6fpv"
Feb 16 17:40:21 crc kubenswrapper[4794]: I0216 17:40:21.153920 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v6fpv"
Feb 16 17:40:21 crc kubenswrapper[4794]: I0216 17:40:21.632706 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m7fmf"
Feb 16 17:40:21 crc kubenswrapper[4794]: I0216 17:40:21.633358 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m7fmf"
Feb 16 17:40:21 crc kubenswrapper[4794]: I0216 17:40:21.772177 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v6fpv"]
Feb 16 17:40:22 crc kubenswrapper[4794]: I0216 17:40:22.601414 4794 generic.go:334] "Generic (PLEG): container finished" podID="5e261c1f-73e1-4df0-8b70-82134d90a4a5" containerID="e709836363a13a032cca74a402ed99a2bb3f85007959af97672976ee12574dd2" exitCode=0
Feb 16 17:40:22 crc kubenswrapper[4794]: I0216 17:40:22.601538 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6fpv" event={"ID":"5e261c1f-73e1-4df0-8b70-82134d90a4a5","Type":"ContainerDied","Data":"e709836363a13a032cca74a402ed99a2bb3f85007959af97672976ee12574dd2"}
Feb 16 17:40:22 crc kubenswrapper[4794]: I0216 17:40:22.601782 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6fpv" event={"ID":"5e261c1f-73e1-4df0-8b70-82134d90a4a5","Type":"ContainerStarted","Data":"7198e4826aadb916c8e2b3b3fe77bfd449c9b3b1beb9febe352b87afa8caacf0"}
Feb 16 17:40:22 crc kubenswrapper[4794]: I0216 17:40:22.828468 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m7fmf" podUID="efb35358-0846-42c4-9492-94555bdc6b67" containerName="registry-server" probeResult="failure" output=<
Feb 16 17:40:22 crc kubenswrapper[4794]: timeout: failed to connect service ":50051" within 1s
Feb 16 17:40:22 crc kubenswrapper[4794]: >
Feb 16 17:40:25 crc kubenswrapper[4794]: I0216 17:40:25.793000 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2"
Feb 16 17:40:25 crc kubenswrapper[4794]: E0216 17:40:25.793930 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:40:28 crc kubenswrapper[4794]: I0216 17:40:28.660739 4794 generic.go:334] "Generic (PLEG): container finished" podID="5e261c1f-73e1-4df0-8b70-82134d90a4a5" containerID="48ee0b0f3573fd225b5f01887fd746079d4cbe93cb4a4edd74c61b93457d91de" exitCode=0
Feb 16 17:40:28 crc kubenswrapper[4794]: I0216 17:40:28.660830 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6fpv" event={"ID":"5e261c1f-73e1-4df0-8b70-82134d90a4a5","Type":"ContainerDied","Data":"48ee0b0f3573fd225b5f01887fd746079d4cbe93cb4a4edd74c61b93457d91de"}
Feb 16 17:40:29 crc kubenswrapper[4794]: I0216 17:40:29.673714 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6fpv" event={"ID":"5e261c1f-73e1-4df0-8b70-82134d90a4a5","Type":"ContainerStarted","Data":"ae6545ae42d4a2087847f5e4a1b338a10e23af65c05a6aacc64c1eac92f20a36"}
Feb 16 17:40:29 crc kubenswrapper[4794]: I0216 17:40:29.696526 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v6fpv" podStartSLOduration=3.014307494 podStartE2EDuration="9.696504926s" podCreationTimestamp="2026-02-16 17:40:20 +0000 UTC" firstStartedPulling="2026-02-16 17:40:22.603270398 +0000 UTC m=+2448.551365065" lastFinishedPulling="2026-02-16 17:40:29.28546785 +0000 UTC m=+2455.233562497" observedRunningTime="2026-02-16 17:40:29.695814306 +0000 UTC m=+2455.643908953" watchObservedRunningTime="2026-02-16 17:40:29.696504926 +0000 UTC m=+2455.644599573"
Feb 16 17:40:31 crc kubenswrapper[4794]: I0216 17:40:31.154346 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v6fpv"
Feb 16 17:40:31 crc kubenswrapper[4794]: I0216 17:40:31.154951 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v6fpv"
Feb 16 17:40:32 crc kubenswrapper[4794]: I0216 17:40:32.204814 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-v6fpv" podUID="5e261c1f-73e1-4df0-8b70-82134d90a4a5" containerName="registry-server" probeResult="failure" output=<
Feb 16 17:40:32 crc kubenswrapper[4794]: timeout: failed to connect service ":50051" within 1s
Feb 16 17:40:32 crc kubenswrapper[4794]: >
Feb 16 17:40:32 crc kubenswrapper[4794]: I0216 17:40:32.679351 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m7fmf" podUID="efb35358-0846-42c4-9492-94555bdc6b67" containerName="registry-server" probeResult="failure" output=<
Feb 16 17:40:32 crc kubenswrapper[4794]: timeout: failed to connect service ":50051" within 1s
Feb 16 17:40:32 crc kubenswrapper[4794]: >
Feb 16 17:40:32 crc kubenswrapper[4794]: E0216 17:40:32.794624 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:40:33 crc kubenswrapper[4794]: E0216 17:40:33.793287 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:40:40 crc kubenswrapper[4794]: I0216 17:40:40.792970 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2"
Feb 16 17:40:40 crc kubenswrapper[4794]: E0216 17:40:40.794484 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:40:41 crc kubenswrapper[4794]: I0216 17:40:41.202240 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v6fpv"
Feb 16 17:40:41 crc kubenswrapper[4794]: I0216 17:40:41.280190 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v6fpv"
Feb 16 17:40:42 crc kubenswrapper[4794]: I0216 17:40:42.273475 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v6fpv"]
Feb 16 17:40:42 crc kubenswrapper[4794]: I0216 17:40:42.446713 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qrdnt"]
Feb 16 17:40:42 crc kubenswrapper[4794]: I0216 17:40:42.446997 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qrdnt" podUID="9cf8044a-dc2d-47f7-9edb-166f30ac8ab2" containerName="registry-server" containerID="cri-o://bdcf07558f2b211b32d97d0a67b6b0b83ff9619d9d916da190c30a7c9096962f" gracePeriod=2
Feb 16 17:40:42 crc kubenswrapper[4794]: I0216 17:40:42.685498 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m7fmf" podUID="efb35358-0846-42c4-9492-94555bdc6b67" containerName="registry-server" probeResult="failure" output=<
Feb 16 17:40:42 crc kubenswrapper[4794]: timeout: failed to connect service ":50051" within 1s
Feb 16 17:40:42 crc kubenswrapper[4794]: >
Feb 16 17:40:42 crc kubenswrapper[4794]: I0216 17:40:42.837074 4794 generic.go:334] "Generic (PLEG): container finished" podID="9cf8044a-dc2d-47f7-9edb-166f30ac8ab2" containerID="bdcf07558f2b211b32d97d0a67b6b0b83ff9619d9d916da190c30a7c9096962f" exitCode=0
Feb 16 17:40:42 crc kubenswrapper[4794]: I0216 17:40:42.837486 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrdnt" event={"ID":"9cf8044a-dc2d-47f7-9edb-166f30ac8ab2","Type":"ContainerDied","Data":"bdcf07558f2b211b32d97d0a67b6b0b83ff9619d9d916da190c30a7c9096962f"}
Feb 16 17:40:42 crc kubenswrapper[4794]: I0216 17:40:42.980805 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qrdnt"
Feb 16 17:40:43 crc kubenswrapper[4794]: I0216 17:40:43.166707 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cf8044a-dc2d-47f7-9edb-166f30ac8ab2-catalog-content\") pod \"9cf8044a-dc2d-47f7-9edb-166f30ac8ab2\" (UID: \"9cf8044a-dc2d-47f7-9edb-166f30ac8ab2\") "
Feb 16 17:40:43 crc kubenswrapper[4794]: I0216 17:40:43.166762 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwltm\" (UniqueName: \"kubernetes.io/projected/9cf8044a-dc2d-47f7-9edb-166f30ac8ab2-kube-api-access-cwltm\") pod \"9cf8044a-dc2d-47f7-9edb-166f30ac8ab2\" (UID: \"9cf8044a-dc2d-47f7-9edb-166f30ac8ab2\") "
Feb 16 17:40:43 crc kubenswrapper[4794]: I0216 17:40:43.166936 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cf8044a-dc2d-47f7-9edb-166f30ac8ab2-utilities\") pod \"9cf8044a-dc2d-47f7-9edb-166f30ac8ab2\" (UID: \"9cf8044a-dc2d-47f7-9edb-166f30ac8ab2\") "
Feb 16 17:40:43 crc kubenswrapper[4794]: I0216 17:40:43.169144 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cf8044a-dc2d-47f7-9edb-166f30ac8ab2-utilities" (OuterVolumeSpecName: "utilities") pod "9cf8044a-dc2d-47f7-9edb-166f30ac8ab2" (UID: "9cf8044a-dc2d-47f7-9edb-166f30ac8ab2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:40:43 crc kubenswrapper[4794]: I0216 17:40:43.183447 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cf8044a-dc2d-47f7-9edb-166f30ac8ab2-kube-api-access-cwltm" (OuterVolumeSpecName: "kube-api-access-cwltm") pod "9cf8044a-dc2d-47f7-9edb-166f30ac8ab2" (UID: "9cf8044a-dc2d-47f7-9edb-166f30ac8ab2"). InnerVolumeSpecName "kube-api-access-cwltm".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:40:43 crc kubenswrapper[4794]: I0216 17:40:43.270446 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cf8044a-dc2d-47f7-9edb-166f30ac8ab2-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:40:43 crc kubenswrapper[4794]: I0216 17:40:43.270494 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwltm\" (UniqueName: \"kubernetes.io/projected/9cf8044a-dc2d-47f7-9edb-166f30ac8ab2-kube-api-access-cwltm\") on node \"crc\" DevicePath \"\"" Feb 16 17:40:43 crc kubenswrapper[4794]: I0216 17:40:43.301709 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cf8044a-dc2d-47f7-9edb-166f30ac8ab2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cf8044a-dc2d-47f7-9edb-166f30ac8ab2" (UID: "9cf8044a-dc2d-47f7-9edb-166f30ac8ab2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:40:43 crc kubenswrapper[4794]: I0216 17:40:43.372247 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cf8044a-dc2d-47f7-9edb-166f30ac8ab2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:40:43 crc kubenswrapper[4794]: I0216 17:40:43.849844 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrdnt" event={"ID":"9cf8044a-dc2d-47f7-9edb-166f30ac8ab2","Type":"ContainerDied","Data":"7e0d3fbb6573eb76f5a71dd29810ee407785da0f608f860054282e1e6e16be48"} Feb 16 17:40:43 crc kubenswrapper[4794]: I0216 17:40:43.850210 4794 scope.go:117] "RemoveContainer" containerID="bdcf07558f2b211b32d97d0a67b6b0b83ff9619d9d916da190c30a7c9096962f" Feb 16 17:40:43 crc kubenswrapper[4794]: I0216 17:40:43.850401 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qrdnt" Feb 16 17:40:43 crc kubenswrapper[4794]: I0216 17:40:43.888524 4794 scope.go:117] "RemoveContainer" containerID="e8db6b17c88c8787b4d7492a2bfe162c4b65610ff33c9f119effb10dc0d72d45" Feb 16 17:40:43 crc kubenswrapper[4794]: I0216 17:40:43.909880 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qrdnt"] Feb 16 17:40:43 crc kubenswrapper[4794]: I0216 17:40:43.919757 4794 scope.go:117] "RemoveContainer" containerID="901c4b8f6ddad898a22d3847ce4f308c6f37ceab4aff8de9d30fa16de856d012" Feb 16 17:40:43 crc kubenswrapper[4794]: I0216 17:40:43.920212 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qrdnt"] Feb 16 17:40:44 crc kubenswrapper[4794]: I0216 17:40:44.814319 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cf8044a-dc2d-47f7-9edb-166f30ac8ab2" path="/var/lib/kubelet/pods/9cf8044a-dc2d-47f7-9edb-166f30ac8ab2/volumes" Feb 16 17:40:45 crc kubenswrapper[4794]: E0216 17:40:45.793774 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:40:45 crc kubenswrapper[4794]: E0216 17:40:45.793979 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:40:52 crc kubenswrapper[4794]: I0216 17:40:52.704910 4794 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-m7fmf" podUID="efb35358-0846-42c4-9492-94555bdc6b67" containerName="registry-server" probeResult="failure" output=< Feb 16 17:40:52 crc kubenswrapper[4794]: timeout: failed to connect service ":50051" within 1s Feb 16 17:40:52 crc kubenswrapper[4794]: > Feb 16 17:40:52 crc kubenswrapper[4794]: I0216 17:40:52.793003 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2" Feb 16 17:40:53 crc kubenswrapper[4794]: I0216 17:40:53.958547 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"606db6f35e9c74feff8bb39ccbb04e71ce2ca1130b67430c806c8e435d10e146"} Feb 16 17:40:58 crc kubenswrapper[4794]: E0216 17:40:58.794944 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:41:00 crc kubenswrapper[4794]: E0216 17:41:00.793492 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:41:01 crc kubenswrapper[4794]: I0216 17:41:01.699110 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m7fmf" Feb 16 17:41:01 crc kubenswrapper[4794]: I0216 17:41:01.767878 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m7fmf" Feb 
16 17:41:02 crc kubenswrapper[4794]: I0216 17:41:02.968498 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m7fmf"] Feb 16 17:41:03 crc kubenswrapper[4794]: I0216 17:41:03.056918 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m7fmf" podUID="efb35358-0846-42c4-9492-94555bdc6b67" containerName="registry-server" containerID="cri-o://143722baa8ce9164e4eda143ac183164ea3731681dadcd056ff0b0aac4a7a848" gracePeriod=2 Feb 16 17:41:03 crc kubenswrapper[4794]: I0216 17:41:03.647871 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m7fmf" Feb 16 17:41:03 crc kubenswrapper[4794]: I0216 17:41:03.754037 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb35358-0846-42c4-9492-94555bdc6b67-catalog-content\") pod \"efb35358-0846-42c4-9492-94555bdc6b67\" (UID: \"efb35358-0846-42c4-9492-94555bdc6b67\") " Feb 16 17:41:03 crc kubenswrapper[4794]: I0216 17:41:03.754225 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlv6f\" (UniqueName: \"kubernetes.io/projected/efb35358-0846-42c4-9492-94555bdc6b67-kube-api-access-dlv6f\") pod \"efb35358-0846-42c4-9492-94555bdc6b67\" (UID: \"efb35358-0846-42c4-9492-94555bdc6b67\") " Feb 16 17:41:03 crc kubenswrapper[4794]: I0216 17:41:03.754512 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb35358-0846-42c4-9492-94555bdc6b67-utilities\") pod \"efb35358-0846-42c4-9492-94555bdc6b67\" (UID: \"efb35358-0846-42c4-9492-94555bdc6b67\") " Feb 16 17:41:03 crc kubenswrapper[4794]: I0216 17:41:03.755727 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/efb35358-0846-42c4-9492-94555bdc6b67-utilities" (OuterVolumeSpecName: "utilities") pod "efb35358-0846-42c4-9492-94555bdc6b67" (UID: "efb35358-0846-42c4-9492-94555bdc6b67"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:41:03 crc kubenswrapper[4794]: I0216 17:41:03.772552 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efb35358-0846-42c4-9492-94555bdc6b67-kube-api-access-dlv6f" (OuterVolumeSpecName: "kube-api-access-dlv6f") pod "efb35358-0846-42c4-9492-94555bdc6b67" (UID: "efb35358-0846-42c4-9492-94555bdc6b67"). InnerVolumeSpecName "kube-api-access-dlv6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:41:03 crc kubenswrapper[4794]: I0216 17:41:03.857232 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb35358-0846-42c4-9492-94555bdc6b67-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:41:03 crc kubenswrapper[4794]: I0216 17:41:03.857262 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlv6f\" (UniqueName: \"kubernetes.io/projected/efb35358-0846-42c4-9492-94555bdc6b67-kube-api-access-dlv6f\") on node \"crc\" DevicePath \"\"" Feb 16 17:41:03 crc kubenswrapper[4794]: I0216 17:41:03.864930 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efb35358-0846-42c4-9492-94555bdc6b67-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efb35358-0846-42c4-9492-94555bdc6b67" (UID: "efb35358-0846-42c4-9492-94555bdc6b67"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:41:03 crc kubenswrapper[4794]: I0216 17:41:03.959383 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb35358-0846-42c4-9492-94555bdc6b67-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:41:04 crc kubenswrapper[4794]: I0216 17:41:04.068267 4794 generic.go:334] "Generic (PLEG): container finished" podID="efb35358-0846-42c4-9492-94555bdc6b67" containerID="143722baa8ce9164e4eda143ac183164ea3731681dadcd056ff0b0aac4a7a848" exitCode=0 Feb 16 17:41:04 crc kubenswrapper[4794]: I0216 17:41:04.068320 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m7fmf" event={"ID":"efb35358-0846-42c4-9492-94555bdc6b67","Type":"ContainerDied","Data":"143722baa8ce9164e4eda143ac183164ea3731681dadcd056ff0b0aac4a7a848"} Feb 16 17:41:04 crc kubenswrapper[4794]: I0216 17:41:04.068348 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m7fmf" event={"ID":"efb35358-0846-42c4-9492-94555bdc6b67","Type":"ContainerDied","Data":"d15f82000746f6a387ccb043ee51d15c71a2f3428f3e9160e5700cf8c442902d"} Feb 16 17:41:04 crc kubenswrapper[4794]: I0216 17:41:04.068364 4794 scope.go:117] "RemoveContainer" containerID="143722baa8ce9164e4eda143ac183164ea3731681dadcd056ff0b0aac4a7a848" Feb 16 17:41:04 crc kubenswrapper[4794]: I0216 17:41:04.068374 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m7fmf" Feb 16 17:41:04 crc kubenswrapper[4794]: I0216 17:41:04.097630 4794 scope.go:117] "RemoveContainer" containerID="a5f42a8f9a3d41a34b2308b82bcff002dc8264e3f8c6386c360028a8074fed7d" Feb 16 17:41:04 crc kubenswrapper[4794]: I0216 17:41:04.125332 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m7fmf"] Feb 16 17:41:04 crc kubenswrapper[4794]: I0216 17:41:04.143245 4794 scope.go:117] "RemoveContainer" containerID="bcb89b900ceaa4f9c52746da677a9340d2ef58124ede70ceca8d9be2f8ca23cd" Feb 16 17:41:04 crc kubenswrapper[4794]: I0216 17:41:04.155391 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m7fmf"] Feb 16 17:41:04 crc kubenswrapper[4794]: I0216 17:41:04.193046 4794 scope.go:117] "RemoveContainer" containerID="143722baa8ce9164e4eda143ac183164ea3731681dadcd056ff0b0aac4a7a848" Feb 16 17:41:04 crc kubenswrapper[4794]: E0216 17:41:04.195141 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"143722baa8ce9164e4eda143ac183164ea3731681dadcd056ff0b0aac4a7a848\": container with ID starting with 143722baa8ce9164e4eda143ac183164ea3731681dadcd056ff0b0aac4a7a848 not found: ID does not exist" containerID="143722baa8ce9164e4eda143ac183164ea3731681dadcd056ff0b0aac4a7a848" Feb 16 17:41:04 crc kubenswrapper[4794]: I0216 17:41:04.195184 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143722baa8ce9164e4eda143ac183164ea3731681dadcd056ff0b0aac4a7a848"} err="failed to get container status \"143722baa8ce9164e4eda143ac183164ea3731681dadcd056ff0b0aac4a7a848\": rpc error: code = NotFound desc = could not find container \"143722baa8ce9164e4eda143ac183164ea3731681dadcd056ff0b0aac4a7a848\": container with ID starting with 143722baa8ce9164e4eda143ac183164ea3731681dadcd056ff0b0aac4a7a848 not found: ID does 
not exist" Feb 16 17:41:04 crc kubenswrapper[4794]: I0216 17:41:04.195219 4794 scope.go:117] "RemoveContainer" containerID="a5f42a8f9a3d41a34b2308b82bcff002dc8264e3f8c6386c360028a8074fed7d" Feb 16 17:41:04 crc kubenswrapper[4794]: E0216 17:41:04.195627 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5f42a8f9a3d41a34b2308b82bcff002dc8264e3f8c6386c360028a8074fed7d\": container with ID starting with a5f42a8f9a3d41a34b2308b82bcff002dc8264e3f8c6386c360028a8074fed7d not found: ID does not exist" containerID="a5f42a8f9a3d41a34b2308b82bcff002dc8264e3f8c6386c360028a8074fed7d" Feb 16 17:41:04 crc kubenswrapper[4794]: I0216 17:41:04.195655 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5f42a8f9a3d41a34b2308b82bcff002dc8264e3f8c6386c360028a8074fed7d"} err="failed to get container status \"a5f42a8f9a3d41a34b2308b82bcff002dc8264e3f8c6386c360028a8074fed7d\": rpc error: code = NotFound desc = could not find container \"a5f42a8f9a3d41a34b2308b82bcff002dc8264e3f8c6386c360028a8074fed7d\": container with ID starting with a5f42a8f9a3d41a34b2308b82bcff002dc8264e3f8c6386c360028a8074fed7d not found: ID does not exist" Feb 16 17:41:04 crc kubenswrapper[4794]: I0216 17:41:04.195674 4794 scope.go:117] "RemoveContainer" containerID="bcb89b900ceaa4f9c52746da677a9340d2ef58124ede70ceca8d9be2f8ca23cd" Feb 16 17:41:04 crc kubenswrapper[4794]: E0216 17:41:04.195906 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcb89b900ceaa4f9c52746da677a9340d2ef58124ede70ceca8d9be2f8ca23cd\": container with ID starting with bcb89b900ceaa4f9c52746da677a9340d2ef58124ede70ceca8d9be2f8ca23cd not found: ID does not exist" containerID="bcb89b900ceaa4f9c52746da677a9340d2ef58124ede70ceca8d9be2f8ca23cd" Feb 16 17:41:04 crc kubenswrapper[4794]: I0216 17:41:04.195940 4794 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcb89b900ceaa4f9c52746da677a9340d2ef58124ede70ceca8d9be2f8ca23cd"} err="failed to get container status \"bcb89b900ceaa4f9c52746da677a9340d2ef58124ede70ceca8d9be2f8ca23cd\": rpc error: code = NotFound desc = could not find container \"bcb89b900ceaa4f9c52746da677a9340d2ef58124ede70ceca8d9be2f8ca23cd\": container with ID starting with bcb89b900ceaa4f9c52746da677a9340d2ef58124ede70ceca8d9be2f8ca23cd not found: ID does not exist" Feb 16 17:41:04 crc kubenswrapper[4794]: I0216 17:41:04.814819 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efb35358-0846-42c4-9492-94555bdc6b67" path="/var/lib/kubelet/pods/efb35358-0846-42c4-9492-94555bdc6b67/volumes" Feb 16 17:41:10 crc kubenswrapper[4794]: E0216 17:41:10.795271 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:41:13 crc kubenswrapper[4794]: E0216 17:41:13.794532 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:41:22 crc kubenswrapper[4794]: E0216 17:41:22.796433 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:41:28 crc 
kubenswrapper[4794]: E0216 17:41:28.795743 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:41:36 crc kubenswrapper[4794]: E0216 17:41:36.793529 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:41:43 crc kubenswrapper[4794]: E0216 17:41:43.793213 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:41:50 crc kubenswrapper[4794]: E0216 17:41:50.794095 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:41:56 crc kubenswrapper[4794]: I0216 17:41:56.794608 4794 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:41:56 crc kubenswrapper[4794]: E0216 17:41:56.895772 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 17:41:56 crc kubenswrapper[4794]: E0216 17:41:56.895854 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 17:41:56 crc kubenswrapper[4794]: E0216 17:41:56.896041 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2h5l2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7gcsf_openstack(c695f880-15cb-45b1-8545-60d8437ec631): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:41:56 crc kubenswrapper[4794]: E0216 17:41:56.897352 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:42:01 crc kubenswrapper[4794]: E0216 17:42:01.908960 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 17:42:01 crc kubenswrapper[4794]: E0216 17:42:01.909273 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 17:42:01 crc kubenswrapper[4794]: E0216 17:42:01.909405 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh58dh6ch557h84h55ch564h5bh58fh5c8h5d4h584h669h667h569h59hd5hdbh9dh67ch5f9h59fh597h96h664h687h66dhfch5ddh5b7h88h59cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9v9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8981f528-1f74-4d56-a93c-22860725b490): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:42:01 crc kubenswrapper[4794]: E0216 17:42:01.911068 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:42:09 crc kubenswrapper[4794]: E0216 17:42:09.794805 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:42:12 crc kubenswrapper[4794]: E0216 17:42:12.794642 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:42:21 crc kubenswrapper[4794]: E0216 17:42:21.793570 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:42:25 crc kubenswrapper[4794]: E0216 17:42:25.795944 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:42:36 crc kubenswrapper[4794]: E0216 17:42:36.794164 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:42:38 crc kubenswrapper[4794]: E0216 17:42:38.794620 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:42:49 crc kubenswrapper[4794]: E0216 17:42:49.794561 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:42:51 crc kubenswrapper[4794]: E0216 17:42:51.794714 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:43:01 crc kubenswrapper[4794]: E0216 17:43:01.795994 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:43:04 crc kubenswrapper[4794]: E0216 17:43:04.805578 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:43:13 crc kubenswrapper[4794]: E0216 17:43:13.794962 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:43:16 crc kubenswrapper[4794]: E0216 17:43:16.794277 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:43:20 crc kubenswrapper[4794]: I0216 17:43:20.140623 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:43:20 crc kubenswrapper[4794]: I0216 17:43:20.140701 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:43:28 crc kubenswrapper[4794]: E0216 17:43:28.795566 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:43:30 crc kubenswrapper[4794]: E0216 17:43:30.794654 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:43:40 crc kubenswrapper[4794]: E0216 17:43:40.794479 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:43:43 crc kubenswrapper[4794]: E0216 17:43:43.793378 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:43:50 crc kubenswrapper[4794]: I0216 17:43:50.140894 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:43:50 crc kubenswrapper[4794]: I0216 17:43:50.141511 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:43:53 crc kubenswrapper[4794]: E0216 17:43:53.794769 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:43:54 crc kubenswrapper[4794]: E0216 17:43:54.802654 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:44:07 crc kubenswrapper[4794]: E0216 17:44:07.794125 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:44:07 crc kubenswrapper[4794]: E0216 17:44:07.794206 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:44:17 crc kubenswrapper[4794]: I0216 17:44:17.300930 4794 generic.go:334] "Generic (PLEG): container finished" podID="7694359c-dd70-4640-bcc6-2ed4377e5cbb" 
containerID="1d5fceca7c06530be6b37837b02e55bdfb64c818a6d20b346c0ea7433d064ae3" exitCode=2 Feb 16 17:44:17 crc kubenswrapper[4794]: I0216 17:44:17.301018 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7" event={"ID":"7694359c-dd70-4640-bcc6-2ed4377e5cbb","Type":"ContainerDied","Data":"1d5fceca7c06530be6b37837b02e55bdfb64c818a6d20b346c0ea7433d064ae3"} Feb 16 17:44:18 crc kubenswrapper[4794]: I0216 17:44:18.873101 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7" Feb 16 17:44:19 crc kubenswrapper[4794]: I0216 17:44:19.053181 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7694359c-dd70-4640-bcc6-2ed4377e5cbb-ssh-key-openstack-edpm-ipam\") pod \"7694359c-dd70-4640-bcc6-2ed4377e5cbb\" (UID: \"7694359c-dd70-4640-bcc6-2ed4377e5cbb\") " Feb 16 17:44:19 crc kubenswrapper[4794]: I0216 17:44:19.053378 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-752kb\" (UniqueName: \"kubernetes.io/projected/7694359c-dd70-4640-bcc6-2ed4377e5cbb-kube-api-access-752kb\") pod \"7694359c-dd70-4640-bcc6-2ed4377e5cbb\" (UID: \"7694359c-dd70-4640-bcc6-2ed4377e5cbb\") " Feb 16 17:44:19 crc kubenswrapper[4794]: I0216 17:44:19.053410 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7694359c-dd70-4640-bcc6-2ed4377e5cbb-inventory\") pod \"7694359c-dd70-4640-bcc6-2ed4377e5cbb\" (UID: \"7694359c-dd70-4640-bcc6-2ed4377e5cbb\") " Feb 16 17:44:19 crc kubenswrapper[4794]: I0216 17:44:19.059532 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7694359c-dd70-4640-bcc6-2ed4377e5cbb-kube-api-access-752kb" (OuterVolumeSpecName: 
"kube-api-access-752kb") pod "7694359c-dd70-4640-bcc6-2ed4377e5cbb" (UID: "7694359c-dd70-4640-bcc6-2ed4377e5cbb"). InnerVolumeSpecName "kube-api-access-752kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:44:19 crc kubenswrapper[4794]: I0216 17:44:19.087789 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7694359c-dd70-4640-bcc6-2ed4377e5cbb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7694359c-dd70-4640-bcc6-2ed4377e5cbb" (UID: "7694359c-dd70-4640-bcc6-2ed4377e5cbb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:44:19 crc kubenswrapper[4794]: I0216 17:44:19.090711 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7694359c-dd70-4640-bcc6-2ed4377e5cbb-inventory" (OuterVolumeSpecName: "inventory") pod "7694359c-dd70-4640-bcc6-2ed4377e5cbb" (UID: "7694359c-dd70-4640-bcc6-2ed4377e5cbb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:44:19 crc kubenswrapper[4794]: I0216 17:44:19.156576 4794 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7694359c-dd70-4640-bcc6-2ed4377e5cbb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 17:44:19 crc kubenswrapper[4794]: I0216 17:44:19.156617 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-752kb\" (UniqueName: \"kubernetes.io/projected/7694359c-dd70-4640-bcc6-2ed4377e5cbb-kube-api-access-752kb\") on node \"crc\" DevicePath \"\"" Feb 16 17:44:19 crc kubenswrapper[4794]: I0216 17:44:19.156633 4794 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7694359c-dd70-4640-bcc6-2ed4377e5cbb-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 17:44:19 crc kubenswrapper[4794]: I0216 17:44:19.326486 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7" event={"ID":"7694359c-dd70-4640-bcc6-2ed4377e5cbb","Type":"ContainerDied","Data":"c4ee309ab11ba7267b917e7df71370e6932206cebd75133de437d7bac0c6f90f"} Feb 16 17:44:19 crc kubenswrapper[4794]: I0216 17:44:19.326538 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7" Feb 16 17:44:19 crc kubenswrapper[4794]: I0216 17:44:19.326545 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4ee309ab11ba7267b917e7df71370e6932206cebd75133de437d7bac0c6f90f" Feb 16 17:44:20 crc kubenswrapper[4794]: I0216 17:44:20.140623 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:44:20 crc kubenswrapper[4794]: I0216 17:44:20.140687 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:44:20 crc kubenswrapper[4794]: I0216 17:44:20.140739 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 17:44:20 crc kubenswrapper[4794]: I0216 17:44:20.141943 4794 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"606db6f35e9c74feff8bb39ccbb04e71ce2ca1130b67430c806c8e435d10e146"} pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:44:20 crc kubenswrapper[4794]: I0216 17:44:20.142033 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" 
containerID="cri-o://606db6f35e9c74feff8bb39ccbb04e71ce2ca1130b67430c806c8e435d10e146" gracePeriod=600 Feb 16 17:44:20 crc kubenswrapper[4794]: I0216 17:44:20.338884 4794 generic.go:334] "Generic (PLEG): container finished" podID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerID="606db6f35e9c74feff8bb39ccbb04e71ce2ca1130b67430c806c8e435d10e146" exitCode=0 Feb 16 17:44:20 crc kubenswrapper[4794]: I0216 17:44:20.338964 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerDied","Data":"606db6f35e9c74feff8bb39ccbb04e71ce2ca1130b67430c806c8e435d10e146"} Feb 16 17:44:20 crc kubenswrapper[4794]: I0216 17:44:20.339223 4794 scope.go:117] "RemoveContainer" containerID="0a4de50ee67947d138f0a6fbc8c3acf24a2b9293c0d0a8cda201de752f64a5c2" Feb 16 17:44:21 crc kubenswrapper[4794]: I0216 17:44:21.350692 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295"} Feb 16 17:44:21 crc kubenswrapper[4794]: E0216 17:44:21.794070 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:44:22 crc kubenswrapper[4794]: E0216 17:44:22.793478 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:44:33 crc kubenswrapper[4794]: E0216 17:44:33.793232 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:44:35 crc kubenswrapper[4794]: E0216 17:44:35.798100 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.071227 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf"] Feb 16 17:44:36 crc kubenswrapper[4794]: E0216 17:44:36.072160 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efb35358-0846-42c4-9492-94555bdc6b67" containerName="registry-server" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.072176 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="efb35358-0846-42c4-9492-94555bdc6b67" containerName="registry-server" Feb 16 17:44:36 crc kubenswrapper[4794]: E0216 17:44:36.072191 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf8044a-dc2d-47f7-9edb-166f30ac8ab2" containerName="registry-server" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.072199 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf8044a-dc2d-47f7-9edb-166f30ac8ab2" containerName="registry-server" Feb 16 17:44:36 crc kubenswrapper[4794]: E0216 17:44:36.072219 4794 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="efb35358-0846-42c4-9492-94555bdc6b67" containerName="extract-utilities" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.072227 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="efb35358-0846-42c4-9492-94555bdc6b67" containerName="extract-utilities" Feb 16 17:44:36 crc kubenswrapper[4794]: E0216 17:44:36.072241 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7694359c-dd70-4640-bcc6-2ed4377e5cbb" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.072250 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="7694359c-dd70-4640-bcc6-2ed4377e5cbb" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 17:44:36 crc kubenswrapper[4794]: E0216 17:44:36.072266 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf8044a-dc2d-47f7-9edb-166f30ac8ab2" containerName="extract-content" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.072272 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf8044a-dc2d-47f7-9edb-166f30ac8ab2" containerName="extract-content" Feb 16 17:44:36 crc kubenswrapper[4794]: E0216 17:44:36.072312 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efb35358-0846-42c4-9492-94555bdc6b67" containerName="extract-content" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.072320 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="efb35358-0846-42c4-9492-94555bdc6b67" containerName="extract-content" Feb 16 17:44:36 crc kubenswrapper[4794]: E0216 17:44:36.072339 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf8044a-dc2d-47f7-9edb-166f30ac8ab2" containerName="extract-utilities" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.072346 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf8044a-dc2d-47f7-9edb-166f30ac8ab2" containerName="extract-utilities" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.072596 
4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="efb35358-0846-42c4-9492-94555bdc6b67" containerName="registry-server" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.072626 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="7694359c-dd70-4640-bcc6-2ed4377e5cbb" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.072646 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cf8044a-dc2d-47f7-9edb-166f30ac8ab2" containerName="registry-server" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.073604 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.080514 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kshzw" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.080729 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.080759 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.080528 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.103613 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf"] Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.187607 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8e0581f8-9225-4111-9249-c8b122cb33d3-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf\" (UID: \"8e0581f8-9225-4111-9249-c8b122cb33d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.187790 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqlc8\" (UniqueName: \"kubernetes.io/projected/8e0581f8-9225-4111-9249-c8b122cb33d3-kube-api-access-gqlc8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf\" (UID: \"8e0581f8-9225-4111-9249-c8b122cb33d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.187839 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e0581f8-9225-4111-9249-c8b122cb33d3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf\" (UID: \"8e0581f8-9225-4111-9249-c8b122cb33d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.290433 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqlc8\" (UniqueName: \"kubernetes.io/projected/8e0581f8-9225-4111-9249-c8b122cb33d3-kube-api-access-gqlc8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf\" (UID: \"8e0581f8-9225-4111-9249-c8b122cb33d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.290526 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e0581f8-9225-4111-9249-c8b122cb33d3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf\" (UID: 
\"8e0581f8-9225-4111-9249-c8b122cb33d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.290569 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e0581f8-9225-4111-9249-c8b122cb33d3-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf\" (UID: \"8e0581f8-9225-4111-9249-c8b122cb33d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.315163 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e0581f8-9225-4111-9249-c8b122cb33d3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf\" (UID: \"8e0581f8-9225-4111-9249-c8b122cb33d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.315943 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e0581f8-9225-4111-9249-c8b122cb33d3-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf\" (UID: \"8e0581f8-9225-4111-9249-c8b122cb33d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.318456 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqlc8\" (UniqueName: \"kubernetes.io/projected/8e0581f8-9225-4111-9249-c8b122cb33d3-kube-api-access-gqlc8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf\" (UID: \"8e0581f8-9225-4111-9249-c8b122cb33d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.423904 4794 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf" Feb 16 17:44:36 crc kubenswrapper[4794]: I0216 17:44:36.978531 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf"] Feb 16 17:44:37 crc kubenswrapper[4794]: I0216 17:44:37.512004 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf" event={"ID":"8e0581f8-9225-4111-9249-c8b122cb33d3","Type":"ContainerStarted","Data":"6e09e45c5b268c9f21f2dccfdc181d6104b7b16daa9f81437b54e2427c70c826"} Feb 16 17:44:38 crc kubenswrapper[4794]: I0216 17:44:38.523352 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf" event={"ID":"8e0581f8-9225-4111-9249-c8b122cb33d3","Type":"ContainerStarted","Data":"408c6bcfe91b6b6e76e5c88d93475ba2bc374517c1146658c1f1370a42fdbdf9"} Feb 16 17:44:38 crc kubenswrapper[4794]: I0216 17:44:38.540824 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf" podStartSLOduration=2.041680047 podStartE2EDuration="2.540803719s" podCreationTimestamp="2026-02-16 17:44:36 +0000 UTC" firstStartedPulling="2026-02-16 17:44:36.987091832 +0000 UTC m=+2702.935186479" lastFinishedPulling="2026-02-16 17:44:37.486215504 +0000 UTC m=+2703.434310151" observedRunningTime="2026-02-16 17:44:38.536140098 +0000 UTC m=+2704.484234745" watchObservedRunningTime="2026-02-16 17:44:38.540803719 +0000 UTC m=+2704.488898366" Feb 16 17:44:46 crc kubenswrapper[4794]: E0216 17:44:46.796360 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:44:49 crc kubenswrapper[4794]: E0216 17:44:49.793898 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:44:58 crc kubenswrapper[4794]: E0216 17:44:58.794070 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:45:00 crc kubenswrapper[4794]: I0216 17:45:00.148286 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521065-dhm45"] Feb 16 17:45:00 crc kubenswrapper[4794]: I0216 17:45:00.150654 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-dhm45"
Feb 16 17:45:00 crc kubenswrapper[4794]: I0216 17:45:00.158409 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 17:45:00 crc kubenswrapper[4794]: I0216 17:45:00.158480 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 16 17:45:00 crc kubenswrapper[4794]: I0216 17:45:00.163643 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521065-dhm45"]
Feb 16 17:45:00 crc kubenswrapper[4794]: I0216 17:45:00.264046 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19ddd02e-dace-4ced-807f-11c9b908350c-config-volume\") pod \"collect-profiles-29521065-dhm45\" (UID: \"19ddd02e-dace-4ced-807f-11c9b908350c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-dhm45"
Feb 16 17:45:00 crc kubenswrapper[4794]: I0216 17:45:00.264182 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbfpk\" (UniqueName: \"kubernetes.io/projected/19ddd02e-dace-4ced-807f-11c9b908350c-kube-api-access-nbfpk\") pod \"collect-profiles-29521065-dhm45\" (UID: \"19ddd02e-dace-4ced-807f-11c9b908350c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-dhm45"
Feb 16 17:45:00 crc kubenswrapper[4794]: I0216 17:45:00.264243 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19ddd02e-dace-4ced-807f-11c9b908350c-secret-volume\") pod \"collect-profiles-29521065-dhm45\" (UID: \"19ddd02e-dace-4ced-807f-11c9b908350c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-dhm45"
Feb 16 17:45:00 crc kubenswrapper[4794]: I0216 17:45:00.367023 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19ddd02e-dace-4ced-807f-11c9b908350c-config-volume\") pod \"collect-profiles-29521065-dhm45\" (UID: \"19ddd02e-dace-4ced-807f-11c9b908350c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-dhm45"
Feb 16 17:45:00 crc kubenswrapper[4794]: I0216 17:45:00.367093 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbfpk\" (UniqueName: \"kubernetes.io/projected/19ddd02e-dace-4ced-807f-11c9b908350c-kube-api-access-nbfpk\") pod \"collect-profiles-29521065-dhm45\" (UID: \"19ddd02e-dace-4ced-807f-11c9b908350c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-dhm45"
Feb 16 17:45:00 crc kubenswrapper[4794]: I0216 17:45:00.367148 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19ddd02e-dace-4ced-807f-11c9b908350c-secret-volume\") pod \"collect-profiles-29521065-dhm45\" (UID: \"19ddd02e-dace-4ced-807f-11c9b908350c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-dhm45"
Feb 16 17:45:00 crc kubenswrapper[4794]: I0216 17:45:00.367861 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19ddd02e-dace-4ced-807f-11c9b908350c-config-volume\") pod \"collect-profiles-29521065-dhm45\" (UID: \"19ddd02e-dace-4ced-807f-11c9b908350c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-dhm45"
Feb 16 17:45:00 crc kubenswrapper[4794]: I0216 17:45:00.376038 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19ddd02e-dace-4ced-807f-11c9b908350c-secret-volume\") pod \"collect-profiles-29521065-dhm45\" (UID: \"19ddd02e-dace-4ced-807f-11c9b908350c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-dhm45"
Feb 16 17:45:00 crc kubenswrapper[4794]: I0216 17:45:00.385641 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbfpk\" (UniqueName: \"kubernetes.io/projected/19ddd02e-dace-4ced-807f-11c9b908350c-kube-api-access-nbfpk\") pod \"collect-profiles-29521065-dhm45\" (UID: \"19ddd02e-dace-4ced-807f-11c9b908350c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-dhm45"
Feb 16 17:45:00 crc kubenswrapper[4794]: I0216 17:45:00.484043 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-dhm45"
Feb 16 17:45:00 crc kubenswrapper[4794]: I0216 17:45:00.994237 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521065-dhm45"]
Feb 16 17:45:01 crc kubenswrapper[4794]: I0216 17:45:01.769467 4794 generic.go:334] "Generic (PLEG): container finished" podID="19ddd02e-dace-4ced-807f-11c9b908350c" containerID="b34af399380d09782265c8f88c5d967841b6b23f394168bb5b3bcf9ab785c64d" exitCode=0
Feb 16 17:45:01 crc kubenswrapper[4794]: I0216 17:45:01.769670 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-dhm45" event={"ID":"19ddd02e-dace-4ced-807f-11c9b908350c","Type":"ContainerDied","Data":"b34af399380d09782265c8f88c5d967841b6b23f394168bb5b3bcf9ab785c64d"}
Feb 16 17:45:01 crc kubenswrapper[4794]: I0216 17:45:01.769769 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-dhm45" event={"ID":"19ddd02e-dace-4ced-807f-11c9b908350c","Type":"ContainerStarted","Data":"befdd4bf70f2e92e7c501b4bddf3aa08287b2653e09185a4d06d6997f76d5568"}
Feb 16 17:45:02 crc kubenswrapper[4794]: E0216 17:45:02.793796 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:45:03 crc kubenswrapper[4794]: I0216 17:45:03.292037 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-dhm45"
Feb 16 17:45:03 crc kubenswrapper[4794]: I0216 17:45:03.376446 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19ddd02e-dace-4ced-807f-11c9b908350c-config-volume\") pod \"19ddd02e-dace-4ced-807f-11c9b908350c\" (UID: \"19ddd02e-dace-4ced-807f-11c9b908350c\") "
Feb 16 17:45:03 crc kubenswrapper[4794]: I0216 17:45:03.376591 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbfpk\" (UniqueName: \"kubernetes.io/projected/19ddd02e-dace-4ced-807f-11c9b908350c-kube-api-access-nbfpk\") pod \"19ddd02e-dace-4ced-807f-11c9b908350c\" (UID: \"19ddd02e-dace-4ced-807f-11c9b908350c\") "
Feb 16 17:45:03 crc kubenswrapper[4794]: I0216 17:45:03.376725 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19ddd02e-dace-4ced-807f-11c9b908350c-secret-volume\") pod \"19ddd02e-dace-4ced-807f-11c9b908350c\" (UID: \"19ddd02e-dace-4ced-807f-11c9b908350c\") "
Feb 16 17:45:03 crc kubenswrapper[4794]: I0216 17:45:03.377446 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19ddd02e-dace-4ced-807f-11c9b908350c-config-volume" (OuterVolumeSpecName: "config-volume") pod "19ddd02e-dace-4ced-807f-11c9b908350c" (UID: "19ddd02e-dace-4ced-807f-11c9b908350c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:45:03 crc kubenswrapper[4794]: I0216 17:45:03.382619 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19ddd02e-dace-4ced-807f-11c9b908350c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "19ddd02e-dace-4ced-807f-11c9b908350c" (UID: "19ddd02e-dace-4ced-807f-11c9b908350c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:45:03 crc kubenswrapper[4794]: I0216 17:45:03.383268 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19ddd02e-dace-4ced-807f-11c9b908350c-kube-api-access-nbfpk" (OuterVolumeSpecName: "kube-api-access-nbfpk") pod "19ddd02e-dace-4ced-807f-11c9b908350c" (UID: "19ddd02e-dace-4ced-807f-11c9b908350c"). InnerVolumeSpecName "kube-api-access-nbfpk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:45:03 crc kubenswrapper[4794]: I0216 17:45:03.478960 4794 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19ddd02e-dace-4ced-807f-11c9b908350c-config-volume\") on node \"crc\" DevicePath \"\""
Feb 16 17:45:03 crc kubenswrapper[4794]: I0216 17:45:03.478991 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbfpk\" (UniqueName: \"kubernetes.io/projected/19ddd02e-dace-4ced-807f-11c9b908350c-kube-api-access-nbfpk\") on node \"crc\" DevicePath \"\""
Feb 16 17:45:03 crc kubenswrapper[4794]: I0216 17:45:03.479003 4794 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19ddd02e-dace-4ced-807f-11c9b908350c-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 16 17:45:03 crc kubenswrapper[4794]: I0216 17:45:03.792707 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-dhm45" event={"ID":"19ddd02e-dace-4ced-807f-11c9b908350c","Type":"ContainerDied","Data":"befdd4bf70f2e92e7c501b4bddf3aa08287b2653e09185a4d06d6997f76d5568"}
Feb 16 17:45:03 crc kubenswrapper[4794]: I0216 17:45:03.792746 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="befdd4bf70f2e92e7c501b4bddf3aa08287b2653e09185a4d06d6997f76d5568"
Feb 16 17:45:03 crc kubenswrapper[4794]: I0216 17:45:03.792790 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-dhm45"
Feb 16 17:45:04 crc kubenswrapper[4794]: I0216 17:45:04.376108 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql"]
Feb 16 17:45:04 crc kubenswrapper[4794]: I0216 17:45:04.388773 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521020-v7cql"]
Feb 16 17:45:04 crc kubenswrapper[4794]: I0216 17:45:04.810881 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a6aade7-1b78-4753-a22d-7251a1b27c9e" path="/var/lib/kubelet/pods/1a6aade7-1b78-4753-a22d-7251a1b27c9e/volumes"
Feb 16 17:45:12 crc kubenswrapper[4794]: E0216 17:45:12.793751 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:45:16 crc kubenswrapper[4794]: E0216 17:45:16.794570 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:45:26 crc kubenswrapper[4794]: E0216 17:45:26.794216 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:45:29 crc kubenswrapper[4794]: E0216 17:45:29.793727 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:45:38 crc kubenswrapper[4794]: E0216 17:45:38.794286 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:45:41 crc kubenswrapper[4794]: I0216 17:45:41.901611 4794 scope.go:117] "RemoveContainer" containerID="f29f00ae2ffdde34bb1c46c703b87e7473eed209f751f1a3a774a72120fde604"
Feb 16 17:45:44 crc kubenswrapper[4794]: E0216 17:45:44.807681 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:45:51 crc kubenswrapper[4794]: E0216 17:45:51.794024 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:45:57 crc kubenswrapper[4794]: E0216 17:45:57.795645 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:46:02 crc kubenswrapper[4794]: E0216 17:46:02.807963 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:46:11 crc kubenswrapper[4794]: E0216 17:46:11.794930 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:46:12 crc kubenswrapper[4794]: I0216 17:46:12.911622 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xfvnh"]
Feb 16 17:46:12 crc kubenswrapper[4794]: E0216 17:46:12.912678 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ddd02e-dace-4ced-807f-11c9b908350c" containerName="collect-profiles"
Feb 16 17:46:12 crc kubenswrapper[4794]: I0216 17:46:12.912698 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ddd02e-dace-4ced-807f-11c9b908350c" containerName="collect-profiles"
Feb 16 17:46:12 crc kubenswrapper[4794]: I0216 17:46:12.912973 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ddd02e-dace-4ced-807f-11c9b908350c" containerName="collect-profiles"
Feb 16 17:46:12 crc kubenswrapper[4794]: I0216 17:46:12.915113 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xfvnh"
Feb 16 17:46:12 crc kubenswrapper[4794]: I0216 17:46:12.931672 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xfvnh"]
Feb 16 17:46:13 crc kubenswrapper[4794]: I0216 17:46:13.094829 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05029d4f-13f1-4025-821a-60f1c4d19ab9-utilities\") pod \"redhat-marketplace-xfvnh\" (UID: \"05029d4f-13f1-4025-821a-60f1c4d19ab9\") " pod="openshift-marketplace/redhat-marketplace-xfvnh"
Feb 16 17:46:13 crc kubenswrapper[4794]: I0216 17:46:13.095370 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k52f9\" (UniqueName: \"kubernetes.io/projected/05029d4f-13f1-4025-821a-60f1c4d19ab9-kube-api-access-k52f9\") pod \"redhat-marketplace-xfvnh\" (UID: \"05029d4f-13f1-4025-821a-60f1c4d19ab9\") " pod="openshift-marketplace/redhat-marketplace-xfvnh"
Feb 16 17:46:13 crc kubenswrapper[4794]: I0216 17:46:13.095824 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05029d4f-13f1-4025-821a-60f1c4d19ab9-catalog-content\") pod \"redhat-marketplace-xfvnh\" (UID: \"05029d4f-13f1-4025-821a-60f1c4d19ab9\") " pod="openshift-marketplace/redhat-marketplace-xfvnh"
Feb 16 17:46:13 crc kubenswrapper[4794]: I0216 17:46:13.202600 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k52f9\" (UniqueName: \"kubernetes.io/projected/05029d4f-13f1-4025-821a-60f1c4d19ab9-kube-api-access-k52f9\") pod \"redhat-marketplace-xfvnh\" (UID: \"05029d4f-13f1-4025-821a-60f1c4d19ab9\") " pod="openshift-marketplace/redhat-marketplace-xfvnh"
Feb 16 17:46:13 crc kubenswrapper[4794]: I0216 17:46:13.202683 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05029d4f-13f1-4025-821a-60f1c4d19ab9-catalog-content\") pod \"redhat-marketplace-xfvnh\" (UID: \"05029d4f-13f1-4025-821a-60f1c4d19ab9\") " pod="openshift-marketplace/redhat-marketplace-xfvnh"
Feb 16 17:46:13 crc kubenswrapper[4794]: I0216 17:46:13.202746 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05029d4f-13f1-4025-821a-60f1c4d19ab9-utilities\") pod \"redhat-marketplace-xfvnh\" (UID: \"05029d4f-13f1-4025-821a-60f1c4d19ab9\") " pod="openshift-marketplace/redhat-marketplace-xfvnh"
Feb 16 17:46:13 crc kubenswrapper[4794]: I0216 17:46:13.203475 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05029d4f-13f1-4025-821a-60f1c4d19ab9-utilities\") pod \"redhat-marketplace-xfvnh\" (UID: \"05029d4f-13f1-4025-821a-60f1c4d19ab9\") " pod="openshift-marketplace/redhat-marketplace-xfvnh"
Feb 16 17:46:13 crc kubenswrapper[4794]: I0216 17:46:13.203513 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05029d4f-13f1-4025-821a-60f1c4d19ab9-catalog-content\") pod \"redhat-marketplace-xfvnh\" (UID: \"05029d4f-13f1-4025-821a-60f1c4d19ab9\") " pod="openshift-marketplace/redhat-marketplace-xfvnh"
Feb 16 17:46:13 crc kubenswrapper[4794]: I0216 17:46:13.242643 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k52f9\" (UniqueName: \"kubernetes.io/projected/05029d4f-13f1-4025-821a-60f1c4d19ab9-kube-api-access-k52f9\") pod \"redhat-marketplace-xfvnh\" (UID: \"05029d4f-13f1-4025-821a-60f1c4d19ab9\") " pod="openshift-marketplace/redhat-marketplace-xfvnh"
Feb 16 17:46:13 crc kubenswrapper[4794]: I0216 17:46:13.539653 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xfvnh"
Feb 16 17:46:14 crc kubenswrapper[4794]: I0216 17:46:14.042632 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xfvnh"]
Feb 16 17:46:14 crc kubenswrapper[4794]: I0216 17:46:14.571416 4794 generic.go:334] "Generic (PLEG): container finished" podID="05029d4f-13f1-4025-821a-60f1c4d19ab9" containerID="b762720ef67145408c3b2db5070ea15b86976573594e6ad529ba255fe73d41eb" exitCode=0
Feb 16 17:46:14 crc kubenswrapper[4794]: I0216 17:46:14.571469 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xfvnh" event={"ID":"05029d4f-13f1-4025-821a-60f1c4d19ab9","Type":"ContainerDied","Data":"b762720ef67145408c3b2db5070ea15b86976573594e6ad529ba255fe73d41eb"}
Feb 16 17:46:14 crc kubenswrapper[4794]: I0216 17:46:14.571496 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xfvnh" event={"ID":"05029d4f-13f1-4025-821a-60f1c4d19ab9","Type":"ContainerStarted","Data":"eeb0a44439105f1413487015796ab42a172d275dd00664a8925ddb14223417d1"}
Feb 16 17:46:14 crc kubenswrapper[4794]: E0216 17:46:14.800284 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:46:16 crc kubenswrapper[4794]: I0216 17:46:16.590628 4794 generic.go:334] "Generic (PLEG): container finished" podID="05029d4f-13f1-4025-821a-60f1c4d19ab9" containerID="c19e0b469ccb34c949fdd6120648c6b88f494a51588e2c2a30b59bc3cf8e0601" exitCode=0
Feb 16 17:46:16 crc kubenswrapper[4794]: I0216 17:46:16.590754 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xfvnh" event={"ID":"05029d4f-13f1-4025-821a-60f1c4d19ab9","Type":"ContainerDied","Data":"c19e0b469ccb34c949fdd6120648c6b88f494a51588e2c2a30b59bc3cf8e0601"}
Feb 16 17:46:17 crc kubenswrapper[4794]: I0216 17:46:17.603877 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xfvnh" event={"ID":"05029d4f-13f1-4025-821a-60f1c4d19ab9","Type":"ContainerStarted","Data":"0de0ffe8fe81a85f5a408102eccdb27f5eac6d11ba60758a0caf27167f9ff61b"}
Feb 16 17:46:17 crc kubenswrapper[4794]: I0216 17:46:17.634976 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xfvnh" podStartSLOduration=3.090034549 podStartE2EDuration="5.634949433s" podCreationTimestamp="2026-02-16 17:46:12 +0000 UTC" firstStartedPulling="2026-02-16 17:46:14.574487086 +0000 UTC m=+2800.522581763" lastFinishedPulling="2026-02-16 17:46:17.119402 +0000 UTC m=+2803.067496647" observedRunningTime="2026-02-16 17:46:17.621646849 +0000 UTC m=+2803.569741496" watchObservedRunningTime="2026-02-16 17:46:17.634949433 +0000 UTC m=+2803.583044120"
Feb 16 17:46:20 crc kubenswrapper[4794]: I0216 17:46:20.140886 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 17:46:20 crc kubenswrapper[4794]: I0216 17:46:20.141278 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 17:46:23 crc kubenswrapper[4794]: I0216 17:46:23.539908 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xfvnh"
Feb 16 17:46:23 crc kubenswrapper[4794]: I0216 17:46:23.540750 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xfvnh"
Feb 16 17:46:23 crc kubenswrapper[4794]: I0216 17:46:23.608870 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xfvnh"
Feb 16 17:46:23 crc kubenswrapper[4794]: I0216 17:46:23.758611 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xfvnh"
Feb 16 17:46:23 crc kubenswrapper[4794]: E0216 17:46:23.802490 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:46:23 crc kubenswrapper[4794]: I0216 17:46:23.862889 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xfvnh"]
Feb 16 17:46:25 crc kubenswrapper[4794]: I0216 17:46:25.682564 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xfvnh" podUID="05029d4f-13f1-4025-821a-60f1c4d19ab9" containerName="registry-server" containerID="cri-o://0de0ffe8fe81a85f5a408102eccdb27f5eac6d11ba60758a0caf27167f9ff61b" gracePeriod=2
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.218890 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xfvnh"
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.370836 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k52f9\" (UniqueName: \"kubernetes.io/projected/05029d4f-13f1-4025-821a-60f1c4d19ab9-kube-api-access-k52f9\") pod \"05029d4f-13f1-4025-821a-60f1c4d19ab9\" (UID: \"05029d4f-13f1-4025-821a-60f1c4d19ab9\") "
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.371022 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05029d4f-13f1-4025-821a-60f1c4d19ab9-utilities\") pod \"05029d4f-13f1-4025-821a-60f1c4d19ab9\" (UID: \"05029d4f-13f1-4025-821a-60f1c4d19ab9\") "
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.371050 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05029d4f-13f1-4025-821a-60f1c4d19ab9-catalog-content\") pod \"05029d4f-13f1-4025-821a-60f1c4d19ab9\" (UID: \"05029d4f-13f1-4025-821a-60f1c4d19ab9\") "
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.372173 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05029d4f-13f1-4025-821a-60f1c4d19ab9-utilities" (OuterVolumeSpecName: "utilities") pod "05029d4f-13f1-4025-821a-60f1c4d19ab9" (UID: "05029d4f-13f1-4025-821a-60f1c4d19ab9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.378567 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05029d4f-13f1-4025-821a-60f1c4d19ab9-kube-api-access-k52f9" (OuterVolumeSpecName: "kube-api-access-k52f9") pod "05029d4f-13f1-4025-821a-60f1c4d19ab9" (UID: "05029d4f-13f1-4025-821a-60f1c4d19ab9"). InnerVolumeSpecName "kube-api-access-k52f9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.429425 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05029d4f-13f1-4025-821a-60f1c4d19ab9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05029d4f-13f1-4025-821a-60f1c4d19ab9" (UID: "05029d4f-13f1-4025-821a-60f1c4d19ab9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.473845 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k52f9\" (UniqueName: \"kubernetes.io/projected/05029d4f-13f1-4025-821a-60f1c4d19ab9-kube-api-access-k52f9\") on node \"crc\" DevicePath \"\""
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.473879 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05029d4f-13f1-4025-821a-60f1c4d19ab9-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.473893 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05029d4f-13f1-4025-821a-60f1c4d19ab9-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.695611 4794 generic.go:334] "Generic (PLEG): container finished" podID="05029d4f-13f1-4025-821a-60f1c4d19ab9" containerID="0de0ffe8fe81a85f5a408102eccdb27f5eac6d11ba60758a0caf27167f9ff61b" exitCode=0
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.695659 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xfvnh" event={"ID":"05029d4f-13f1-4025-821a-60f1c4d19ab9","Type":"ContainerDied","Data":"0de0ffe8fe81a85f5a408102eccdb27f5eac6d11ba60758a0caf27167f9ff61b"}
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.695706 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xfvnh" event={"ID":"05029d4f-13f1-4025-821a-60f1c4d19ab9","Type":"ContainerDied","Data":"eeb0a44439105f1413487015796ab42a172d275dd00664a8925ddb14223417d1"}
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.695724 4794 scope.go:117] "RemoveContainer" containerID="0de0ffe8fe81a85f5a408102eccdb27f5eac6d11ba60758a0caf27167f9ff61b"
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.697098 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xfvnh"
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.721199 4794 scope.go:117] "RemoveContainer" containerID="c19e0b469ccb34c949fdd6120648c6b88f494a51588e2c2a30b59bc3cf8e0601"
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.736132 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xfvnh"]
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.747345 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xfvnh"]
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.762233 4794 scope.go:117] "RemoveContainer" containerID="b762720ef67145408c3b2db5070ea15b86976573594e6ad529ba255fe73d41eb"
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.805657 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05029d4f-13f1-4025-821a-60f1c4d19ab9" path="/var/lib/kubelet/pods/05029d4f-13f1-4025-821a-60f1c4d19ab9/volumes"
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.811206 4794 scope.go:117] "RemoveContainer" containerID="0de0ffe8fe81a85f5a408102eccdb27f5eac6d11ba60758a0caf27167f9ff61b"
Feb 16 17:46:26 crc kubenswrapper[4794]: E0216 17:46:26.811833 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0de0ffe8fe81a85f5a408102eccdb27f5eac6d11ba60758a0caf27167f9ff61b\": container with ID starting with 0de0ffe8fe81a85f5a408102eccdb27f5eac6d11ba60758a0caf27167f9ff61b not found: ID does not exist" containerID="0de0ffe8fe81a85f5a408102eccdb27f5eac6d11ba60758a0caf27167f9ff61b"
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.811874 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0de0ffe8fe81a85f5a408102eccdb27f5eac6d11ba60758a0caf27167f9ff61b"} err="failed to get container status \"0de0ffe8fe81a85f5a408102eccdb27f5eac6d11ba60758a0caf27167f9ff61b\": rpc error: code = NotFound desc = could not find container \"0de0ffe8fe81a85f5a408102eccdb27f5eac6d11ba60758a0caf27167f9ff61b\": container with ID starting with 0de0ffe8fe81a85f5a408102eccdb27f5eac6d11ba60758a0caf27167f9ff61b not found: ID does not exist"
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.811900 4794 scope.go:117] "RemoveContainer" containerID="c19e0b469ccb34c949fdd6120648c6b88f494a51588e2c2a30b59bc3cf8e0601"
Feb 16 17:46:26 crc kubenswrapper[4794]: E0216 17:46:26.812312 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c19e0b469ccb34c949fdd6120648c6b88f494a51588e2c2a30b59bc3cf8e0601\": container with ID starting with c19e0b469ccb34c949fdd6120648c6b88f494a51588e2c2a30b59bc3cf8e0601 not found: ID does not exist" containerID="c19e0b469ccb34c949fdd6120648c6b88f494a51588e2c2a30b59bc3cf8e0601"
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.812341 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c19e0b469ccb34c949fdd6120648c6b88f494a51588e2c2a30b59bc3cf8e0601"} err="failed to get container status \"c19e0b469ccb34c949fdd6120648c6b88f494a51588e2c2a30b59bc3cf8e0601\": rpc error: code = NotFound desc = could not find container \"c19e0b469ccb34c949fdd6120648c6b88f494a51588e2c2a30b59bc3cf8e0601\": container with ID starting with c19e0b469ccb34c949fdd6120648c6b88f494a51588e2c2a30b59bc3cf8e0601 not found: ID does not exist"
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.812362 4794 scope.go:117] "RemoveContainer" containerID="b762720ef67145408c3b2db5070ea15b86976573594e6ad529ba255fe73d41eb"
Feb 16 17:46:26 crc kubenswrapper[4794]: E0216 17:46:26.812581 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b762720ef67145408c3b2db5070ea15b86976573594e6ad529ba255fe73d41eb\": container with ID starting with b762720ef67145408c3b2db5070ea15b86976573594e6ad529ba255fe73d41eb not found: ID does not exist" containerID="b762720ef67145408c3b2db5070ea15b86976573594e6ad529ba255fe73d41eb"
Feb 16 17:46:26 crc kubenswrapper[4794]: I0216 17:46:26.812608 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b762720ef67145408c3b2db5070ea15b86976573594e6ad529ba255fe73d41eb"} err="failed to get container status \"b762720ef67145408c3b2db5070ea15b86976573594e6ad529ba255fe73d41eb\": rpc error: code = NotFound desc = could not find container \"b762720ef67145408c3b2db5070ea15b86976573594e6ad529ba255fe73d41eb\": container with ID starting with b762720ef67145408c3b2db5070ea15b86976573594e6ad529ba255fe73d41eb not found: ID does not exist"
Feb 16 17:46:28 crc kubenswrapper[4794]: E0216 17:46:28.797647 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:46:34 crc kubenswrapper[4794]: E0216 17:46:34.813097 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:46:39 crc kubenswrapper[4794]: E0216 17:46:39.794547 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:46:49 crc kubenswrapper[4794]: E0216 17:46:49.794934 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:46:50 crc kubenswrapper[4794]: I0216 17:46:50.140926 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 17:46:50 crc kubenswrapper[4794]: I0216 17:46:50.141344 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 17:46:52 crc kubenswrapper[4794]: E0216 17:46:52.796864 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:47:04 crc kubenswrapper[4794]: I0216 17:47:04.805274 4794 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 17:47:04 crc kubenswrapper[4794]: E0216 17:47:04.893426 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 17:47:04 crc kubenswrapper[4794]: E0216 17:47:04.893532 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 16 17:47:04 crc kubenswrapper[4794]: E0216 17:47:04.893713 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2h5l2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7gcsf_openstack(c695f880-15cb-45b1-8545-60d8437ec631): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:47:04 crc kubenswrapper[4794]: E0216 17:47:04.894956 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:47:04 crc kubenswrapper[4794]: E0216 17:47:04.924533 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 17:47:04 crc kubenswrapper[4794]: E0216 17:47:04.924625 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 17:47:04 crc kubenswrapper[4794]: E0216 17:47:04.924837 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh58dh6ch557h84h55ch564h5bh58fh5c8h5d4h584h669h667h569h59hd5hdbh9dh67ch5f9h59fh597h96h664h687h66dhfch5ddh5b7h88h59cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:
tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9v9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8981f528-1f74-4d56-a93c-22860725b490): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 17:47:04 crc kubenswrapper[4794]: E0216 17:47:04.926553 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:47:17 crc kubenswrapper[4794]: E0216 17:47:17.793712 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:47:18 crc kubenswrapper[4794]: E0216 17:47:18.794802 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:47:20 crc kubenswrapper[4794]: I0216 17:47:20.141198 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:47:20 crc kubenswrapper[4794]: I0216 17:47:20.141619 4794 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:47:20 crc kubenswrapper[4794]: I0216 17:47:20.141683 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 17:47:20 crc kubenswrapper[4794]: I0216 17:47:20.142774 4794 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295"} pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:47:20 crc kubenswrapper[4794]: I0216 17:47:20.142860 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" containerID="cri-o://edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295" gracePeriod=600 Feb 16 17:47:20 crc kubenswrapper[4794]: E0216 17:47:20.261293 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:47:20 crc kubenswrapper[4794]: I0216 17:47:20.303011 4794 generic.go:334] "Generic (PLEG): container finished" podID="2d17fb0b-381a-46a1-8bba-33daee594e18" 
containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295" exitCode=0 Feb 16 17:47:20 crc kubenswrapper[4794]: I0216 17:47:20.303059 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerDied","Data":"edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295"} Feb 16 17:47:20 crc kubenswrapper[4794]: I0216 17:47:20.303099 4794 scope.go:117] "RemoveContainer" containerID="606db6f35e9c74feff8bb39ccbb04e71ce2ca1130b67430c806c8e435d10e146" Feb 16 17:47:20 crc kubenswrapper[4794]: I0216 17:47:20.303794 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295" Feb 16 17:47:20 crc kubenswrapper[4794]: E0216 17:47:20.304047 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:47:29 crc kubenswrapper[4794]: E0216 17:47:29.793468 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:47:32 crc kubenswrapper[4794]: I0216 17:47:32.791742 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295" Feb 16 17:47:32 crc kubenswrapper[4794]: E0216 17:47:32.792360 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:47:32 crc kubenswrapper[4794]: E0216 17:47:32.792958 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:47:42 crc kubenswrapper[4794]: E0216 17:47:42.794580 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:47:44 crc kubenswrapper[4794]: E0216 17:47:44.810657 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:47:46 crc kubenswrapper[4794]: I0216 17:47:46.793205 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295" Feb 16 17:47:46 crc kubenswrapper[4794]: E0216 17:47:46.794300 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:47:55 crc kubenswrapper[4794]: E0216 17:47:55.793697 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:47:57 crc kubenswrapper[4794]: I0216 17:47:57.791383 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295" Feb 16 17:47:57 crc kubenswrapper[4794]: E0216 17:47:57.792065 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:47:59 crc kubenswrapper[4794]: E0216 17:47:59.794914 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:48:08 crc kubenswrapper[4794]: E0216 17:48:08.795240 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:48:09 crc kubenswrapper[4794]: I0216 17:48:09.792998 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295" Feb 16 17:48:09 crc kubenswrapper[4794]: E0216 17:48:09.793984 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:48:11 crc kubenswrapper[4794]: E0216 17:48:11.793609 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:48:21 crc kubenswrapper[4794]: I0216 17:48:21.791243 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295" Feb 16 17:48:21 crc kubenswrapper[4794]: E0216 17:48:21.792156 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:48:22 crc kubenswrapper[4794]: E0216 17:48:22.793278 4794 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:48:22 crc kubenswrapper[4794]: E0216 17:48:22.793973 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:48:33 crc kubenswrapper[4794]: E0216 17:48:33.796206 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:48:36 crc kubenswrapper[4794]: I0216 17:48:36.795370 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295" Feb 16 17:48:36 crc kubenswrapper[4794]: E0216 17:48:36.795941 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:48:36 crc kubenswrapper[4794]: E0216 17:48:36.798060 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:48:47 crc kubenswrapper[4794]: E0216 17:48:47.793677 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:48:48 crc kubenswrapper[4794]: I0216 17:48:48.791605 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295" Feb 16 17:48:48 crc kubenswrapper[4794]: E0216 17:48:48.792190 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:48:50 crc kubenswrapper[4794]: E0216 17:48:50.793965 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:48:59 crc kubenswrapper[4794]: E0216 17:48:59.796597 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:49:03 crc kubenswrapper[4794]: I0216 17:49:03.791202 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295" Feb 16 17:49:03 crc kubenswrapper[4794]: E0216 17:49:03.791712 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:49:04 crc kubenswrapper[4794]: E0216 17:49:04.800773 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:49:12 crc kubenswrapper[4794]: E0216 17:49:12.793925 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:49:14 crc kubenswrapper[4794]: I0216 17:49:14.809981 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295" Feb 16 17:49:14 crc kubenswrapper[4794]: E0216 17:49:14.810433 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:49:16 crc kubenswrapper[4794]: E0216 17:49:16.793951 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:49:25 crc kubenswrapper[4794]: I0216 17:49:25.792208 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295" Feb 16 17:49:25 crc kubenswrapper[4794]: E0216 17:49:25.794335 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:49:25 crc kubenswrapper[4794]: E0216 17:49:25.795445 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:49:30 crc kubenswrapper[4794]: E0216 17:49:30.797347 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:49:36 crc kubenswrapper[4794]: E0216 17:49:36.794998 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:49:40 crc kubenswrapper[4794]: I0216 17:49:40.792443 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295"
Feb 16 17:49:40 crc kubenswrapper[4794]: E0216 17:49:40.793236 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:49:41 crc kubenswrapper[4794]: E0216 17:49:41.792748 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:49:49 crc kubenswrapper[4794]: E0216 17:49:49.795128 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:49:55 crc kubenswrapper[4794]: I0216 17:49:55.791623 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295"
Feb 16 17:49:55 crc kubenswrapper[4794]: E0216 17:49:55.792416 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:49:55 crc kubenswrapper[4794]: E0216 17:49:55.793868 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:50:01 crc kubenswrapper[4794]: E0216 17:50:01.793674 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:50:07 crc kubenswrapper[4794]: E0216 17:50:07.793950 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:50:08 crc kubenswrapper[4794]: I0216 17:50:08.792217 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295"
Feb 16 17:50:08 crc kubenswrapper[4794]: E0216 17:50:08.792639 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:50:14 crc kubenswrapper[4794]: E0216 17:50:14.802154 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:50:15 crc kubenswrapper[4794]: I0216 17:50:15.757535 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2sxnd"]
Feb 16 17:50:15 crc kubenswrapper[4794]: E0216 17:50:15.758176 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05029d4f-13f1-4025-821a-60f1c4d19ab9" containerName="extract-utilities"
Feb 16 17:50:15 crc kubenswrapper[4794]: I0216 17:50:15.758198 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="05029d4f-13f1-4025-821a-60f1c4d19ab9" containerName="extract-utilities"
Feb 16 17:50:15 crc kubenswrapper[4794]: E0216 17:50:15.758238 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05029d4f-13f1-4025-821a-60f1c4d19ab9" containerName="registry-server"
Feb 16 17:50:15 crc kubenswrapper[4794]: I0216 17:50:15.758244 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="05029d4f-13f1-4025-821a-60f1c4d19ab9" containerName="registry-server"
Feb 16 17:50:15 crc kubenswrapper[4794]: E0216 17:50:15.758269 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05029d4f-13f1-4025-821a-60f1c4d19ab9" containerName="extract-content"
Feb 16 17:50:15 crc kubenswrapper[4794]: I0216 17:50:15.758276 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="05029d4f-13f1-4025-821a-60f1c4d19ab9" containerName="extract-content"
Feb 16 17:50:15 crc kubenswrapper[4794]: I0216 17:50:15.758577 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="05029d4f-13f1-4025-821a-60f1c4d19ab9" containerName="registry-server"
Feb 16 17:50:15 crc kubenswrapper[4794]: I0216 17:50:15.762805 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2sxnd"
Feb 16 17:50:15 crc kubenswrapper[4794]: I0216 17:50:15.774197 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2sxnd"]
Feb 16 17:50:15 crc kubenswrapper[4794]: I0216 17:50:15.828501 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12df55d1-b4ca-474a-98d5-6a736b66bf6c-catalog-content\") pod \"community-operators-2sxnd\" (UID: \"12df55d1-b4ca-474a-98d5-6a736b66bf6c\") " pod="openshift-marketplace/community-operators-2sxnd"
Feb 16 17:50:15 crc kubenswrapper[4794]: I0216 17:50:15.829225 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12df55d1-b4ca-474a-98d5-6a736b66bf6c-utilities\") pod \"community-operators-2sxnd\" (UID: \"12df55d1-b4ca-474a-98d5-6a736b66bf6c\") " pod="openshift-marketplace/community-operators-2sxnd"
Feb 16 17:50:15 crc kubenswrapper[4794]: I0216 17:50:15.829374 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxz68\" (UniqueName: \"kubernetes.io/projected/12df55d1-b4ca-474a-98d5-6a736b66bf6c-kube-api-access-vxz68\") pod \"community-operators-2sxnd\" (UID: \"12df55d1-b4ca-474a-98d5-6a736b66bf6c\") " pod="openshift-marketplace/community-operators-2sxnd"
Feb 16 17:50:15 crc kubenswrapper[4794]: I0216 17:50:15.931536 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12df55d1-b4ca-474a-98d5-6a736b66bf6c-utilities\") pod \"community-operators-2sxnd\" (UID: \"12df55d1-b4ca-474a-98d5-6a736b66bf6c\") " pod="openshift-marketplace/community-operators-2sxnd"
Feb 16 17:50:15 crc kubenswrapper[4794]: I0216 17:50:15.931645 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxz68\" (UniqueName: \"kubernetes.io/projected/12df55d1-b4ca-474a-98d5-6a736b66bf6c-kube-api-access-vxz68\") pod \"community-operators-2sxnd\" (UID: \"12df55d1-b4ca-474a-98d5-6a736b66bf6c\") " pod="openshift-marketplace/community-operators-2sxnd"
Feb 16 17:50:15 crc kubenswrapper[4794]: I0216 17:50:15.931698 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12df55d1-b4ca-474a-98d5-6a736b66bf6c-catalog-content\") pod \"community-operators-2sxnd\" (UID: \"12df55d1-b4ca-474a-98d5-6a736b66bf6c\") " pod="openshift-marketplace/community-operators-2sxnd"
Feb 16 17:50:15 crc kubenswrapper[4794]: I0216 17:50:15.932205 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12df55d1-b4ca-474a-98d5-6a736b66bf6c-utilities\") pod \"community-operators-2sxnd\" (UID: \"12df55d1-b4ca-474a-98d5-6a736b66bf6c\") " pod="openshift-marketplace/community-operators-2sxnd"
Feb 16 17:50:15 crc kubenswrapper[4794]: I0216 17:50:15.932294 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12df55d1-b4ca-474a-98d5-6a736b66bf6c-catalog-content\") pod \"community-operators-2sxnd\" (UID: \"12df55d1-b4ca-474a-98d5-6a736b66bf6c\") " pod="openshift-marketplace/community-operators-2sxnd"
Feb 16 17:50:15 crc kubenswrapper[4794]: I0216 17:50:15.957120 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxz68\" (UniqueName: \"kubernetes.io/projected/12df55d1-b4ca-474a-98d5-6a736b66bf6c-kube-api-access-vxz68\") pod \"community-operators-2sxnd\" (UID: \"12df55d1-b4ca-474a-98d5-6a736b66bf6c\") " pod="openshift-marketplace/community-operators-2sxnd"
Feb 16 17:50:16 crc kubenswrapper[4794]: I0216 17:50:16.096717 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2sxnd"
Feb 16 17:50:16 crc kubenswrapper[4794]: I0216 17:50:16.679368 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2sxnd"]
Feb 16 17:50:17 crc kubenswrapper[4794]: I0216 17:50:17.314227 4794 generic.go:334] "Generic (PLEG): container finished" podID="12df55d1-b4ca-474a-98d5-6a736b66bf6c" containerID="0ae9b936a08a6c976e037017d0f964dc5dce42e6b52f228c08f04d00f67fb7bd" exitCode=0
Feb 16 17:50:17 crc kubenswrapper[4794]: I0216 17:50:17.314363 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sxnd" event={"ID":"12df55d1-b4ca-474a-98d5-6a736b66bf6c","Type":"ContainerDied","Data":"0ae9b936a08a6c976e037017d0f964dc5dce42e6b52f228c08f04d00f67fb7bd"}
Feb 16 17:50:17 crc kubenswrapper[4794]: I0216 17:50:17.314642 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sxnd" event={"ID":"12df55d1-b4ca-474a-98d5-6a736b66bf6c","Type":"ContainerStarted","Data":"30ca507a4ec1d7f92e4bcfd41d913622bc3352349100a0ea5502d44e92e08378"}
Feb 16 17:50:18 crc kubenswrapper[4794]: I0216 17:50:18.328003 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sxnd" event={"ID":"12df55d1-b4ca-474a-98d5-6a736b66bf6c","Type":"ContainerStarted","Data":"d9ff78678dc401294689401abd16eaf0f66caf4bf08322e9fe86bef946a89021"}
Feb 16 17:50:18 crc kubenswrapper[4794]: I0216 17:50:18.747145 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l7zzl"]
Feb 16 17:50:18 crc kubenswrapper[4794]: I0216 17:50:18.749954 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l7zzl"
Feb 16 17:50:18 crc kubenswrapper[4794]: I0216 17:50:18.784719 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l7zzl"]
Feb 16 17:50:18 crc kubenswrapper[4794]: I0216 17:50:18.927674 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbml9\" (UniqueName: \"kubernetes.io/projected/ea1dc135-ec78-4c46-8008-7c56fed48b38-kube-api-access-tbml9\") pod \"redhat-operators-l7zzl\" (UID: \"ea1dc135-ec78-4c46-8008-7c56fed48b38\") " pod="openshift-marketplace/redhat-operators-l7zzl"
Feb 16 17:50:18 crc kubenswrapper[4794]: I0216 17:50:18.927760 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea1dc135-ec78-4c46-8008-7c56fed48b38-catalog-content\") pod \"redhat-operators-l7zzl\" (UID: \"ea1dc135-ec78-4c46-8008-7c56fed48b38\") " pod="openshift-marketplace/redhat-operators-l7zzl"
Feb 16 17:50:18 crc kubenswrapper[4794]: I0216 17:50:18.927822 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea1dc135-ec78-4c46-8008-7c56fed48b38-utilities\") pod \"redhat-operators-l7zzl\" (UID: \"ea1dc135-ec78-4c46-8008-7c56fed48b38\") " pod="openshift-marketplace/redhat-operators-l7zzl"
Feb 16 17:50:19 crc kubenswrapper[4794]: I0216 17:50:19.031003 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbml9\" (UniqueName: \"kubernetes.io/projected/ea1dc135-ec78-4c46-8008-7c56fed48b38-kube-api-access-tbml9\") pod \"redhat-operators-l7zzl\" (UID: \"ea1dc135-ec78-4c46-8008-7c56fed48b38\") " pod="openshift-marketplace/redhat-operators-l7zzl"
Feb 16 17:50:19 crc kubenswrapper[4794]: I0216 17:50:19.031091 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea1dc135-ec78-4c46-8008-7c56fed48b38-catalog-content\") pod \"redhat-operators-l7zzl\" (UID: \"ea1dc135-ec78-4c46-8008-7c56fed48b38\") " pod="openshift-marketplace/redhat-operators-l7zzl"
Feb 16 17:50:19 crc kubenswrapper[4794]: I0216 17:50:19.031141 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea1dc135-ec78-4c46-8008-7c56fed48b38-utilities\") pod \"redhat-operators-l7zzl\" (UID: \"ea1dc135-ec78-4c46-8008-7c56fed48b38\") " pod="openshift-marketplace/redhat-operators-l7zzl"
Feb 16 17:50:19 crc kubenswrapper[4794]: I0216 17:50:19.031813 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea1dc135-ec78-4c46-8008-7c56fed48b38-catalog-content\") pod \"redhat-operators-l7zzl\" (UID: \"ea1dc135-ec78-4c46-8008-7c56fed48b38\") " pod="openshift-marketplace/redhat-operators-l7zzl"
Feb 16 17:50:19 crc kubenswrapper[4794]: I0216 17:50:19.031833 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea1dc135-ec78-4c46-8008-7c56fed48b38-utilities\") pod \"redhat-operators-l7zzl\" (UID: \"ea1dc135-ec78-4c46-8008-7c56fed48b38\") " pod="openshift-marketplace/redhat-operators-l7zzl"
Feb 16 17:50:19 crc kubenswrapper[4794]: I0216 17:50:19.056005 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbml9\" (UniqueName: \"kubernetes.io/projected/ea1dc135-ec78-4c46-8008-7c56fed48b38-kube-api-access-tbml9\") pod \"redhat-operators-l7zzl\" (UID: \"ea1dc135-ec78-4c46-8008-7c56fed48b38\") " pod="openshift-marketplace/redhat-operators-l7zzl"
Feb 16 17:50:19 crc kubenswrapper[4794]: I0216 17:50:19.077870 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l7zzl"
Feb 16 17:50:19 crc kubenswrapper[4794]: I0216 17:50:19.762513 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l7zzl"]
Feb 16 17:50:19 crc kubenswrapper[4794]: W0216 17:50:19.764435 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea1dc135_ec78_4c46_8008_7c56fed48b38.slice/crio-6c1d958e64a0e4db6bdeb58aabcce382805a489498de595640bab325675cf2f8 WatchSource:0}: Error finding container 6c1d958e64a0e4db6bdeb58aabcce382805a489498de595640bab325675cf2f8: Status 404 returned error can't find the container with id 6c1d958e64a0e4db6bdeb58aabcce382805a489498de595640bab325675cf2f8
Feb 16 17:50:20 crc kubenswrapper[4794]: I0216 17:50:20.357809 4794 generic.go:334] "Generic (PLEG): container finished" podID="12df55d1-b4ca-474a-98d5-6a736b66bf6c" containerID="d9ff78678dc401294689401abd16eaf0f66caf4bf08322e9fe86bef946a89021" exitCode=0
Feb 16 17:50:20 crc kubenswrapper[4794]: I0216 17:50:20.357863 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sxnd" event={"ID":"12df55d1-b4ca-474a-98d5-6a736b66bf6c","Type":"ContainerDied","Data":"d9ff78678dc401294689401abd16eaf0f66caf4bf08322e9fe86bef946a89021"}
Feb 16 17:50:20 crc kubenswrapper[4794]: I0216 17:50:20.360867 4794 generic.go:334] "Generic (PLEG): container finished" podID="ea1dc135-ec78-4c46-8008-7c56fed48b38" containerID="7b282a733ced30878701fb5870e58f1d00f897888c6c7211f36686c0fcd932e7" exitCode=0
Feb 16 17:50:20 crc kubenswrapper[4794]: I0216 17:50:20.360911 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7zzl" event={"ID":"ea1dc135-ec78-4c46-8008-7c56fed48b38","Type":"ContainerDied","Data":"7b282a733ced30878701fb5870e58f1d00f897888c6c7211f36686c0fcd932e7"}
Feb 16 17:50:20 crc kubenswrapper[4794]: I0216 17:50:20.360939 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7zzl" event={"ID":"ea1dc135-ec78-4c46-8008-7c56fed48b38","Type":"ContainerStarted","Data":"6c1d958e64a0e4db6bdeb58aabcce382805a489498de595640bab325675cf2f8"}
Feb 16 17:50:21 crc kubenswrapper[4794]: I0216 17:50:21.374569 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sxnd" event={"ID":"12df55d1-b4ca-474a-98d5-6a736b66bf6c","Type":"ContainerStarted","Data":"2a86c977df85bfdebaf6bdf14bebdde766f2469dbd2ec778aa60bb3668e682cf"}
Feb 16 17:50:21 crc kubenswrapper[4794]: I0216 17:50:21.380157 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7zzl" event={"ID":"ea1dc135-ec78-4c46-8008-7c56fed48b38","Type":"ContainerStarted","Data":"1386183a88d09196b27bb6efc7ee796d7c81bfc90bb53d97be4907d3421b9d25"}
Feb 16 17:50:21 crc kubenswrapper[4794]: I0216 17:50:21.407368 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2sxnd" podStartSLOduration=2.87435154 podStartE2EDuration="6.407340974s" podCreationTimestamp="2026-02-16 17:50:15 +0000 UTC" firstStartedPulling="2026-02-16 17:50:17.317219677 +0000 UTC m=+3043.265314324" lastFinishedPulling="2026-02-16 17:50:20.850209111 +0000 UTC m=+3046.798303758" observedRunningTime="2026-02-16 17:50:21.394477971 +0000 UTC m=+3047.342572628" watchObservedRunningTime="2026-02-16 17:50:21.407340974 +0000 UTC m=+3047.355435621"
Feb 16 17:50:22 crc kubenswrapper[4794]: E0216 17:50:22.795548 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:50:23 crc kubenswrapper[4794]: I0216 17:50:23.791433 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295"
Feb 16 17:50:23 crc kubenswrapper[4794]: E0216 17:50:23.791769 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:50:26 crc kubenswrapper[4794]: I0216 17:50:26.097042 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2sxnd"
Feb 16 17:50:26 crc kubenswrapper[4794]: I0216 17:50:26.097618 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2sxnd"
Feb 16 17:50:26 crc kubenswrapper[4794]: I0216 17:50:26.435924 4794 generic.go:334] "Generic (PLEG): container finished" podID="ea1dc135-ec78-4c46-8008-7c56fed48b38" containerID="1386183a88d09196b27bb6efc7ee796d7c81bfc90bb53d97be4907d3421b9d25" exitCode=0
Feb 16 17:50:26 crc kubenswrapper[4794]: I0216 17:50:26.435968 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7zzl" event={"ID":"ea1dc135-ec78-4c46-8008-7c56fed48b38","Type":"ContainerDied","Data":"1386183a88d09196b27bb6efc7ee796d7c81bfc90bb53d97be4907d3421b9d25"}
Feb 16 17:50:27 crc kubenswrapper[4794]: I0216 17:50:27.149084 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-2sxnd" podUID="12df55d1-b4ca-474a-98d5-6a736b66bf6c" containerName="registry-server" probeResult="failure" output=<
Feb 16 17:50:27 crc kubenswrapper[4794]: timeout: failed to connect service ":50051" within 1s
Feb 16 17:50:27 crc kubenswrapper[4794]: >
Feb 16 17:50:27 crc kubenswrapper[4794]: I0216 17:50:27.448410 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7zzl" event={"ID":"ea1dc135-ec78-4c46-8008-7c56fed48b38","Type":"ContainerStarted","Data":"45f78f5894ab8e64326da98882ee8fd5dbe7531b7038729e7c55edd0df06354d"}
Feb 16 17:50:27 crc kubenswrapper[4794]: I0216 17:50:27.474627 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l7zzl" podStartSLOduration=3.025895185 podStartE2EDuration="9.474602382s" podCreationTimestamp="2026-02-16 17:50:18 +0000 UTC" firstStartedPulling="2026-02-16 17:50:20.362853175 +0000 UTC m=+3046.310947832" lastFinishedPulling="2026-02-16 17:50:26.811560382 +0000 UTC m=+3052.759655029" observedRunningTime="2026-02-16 17:50:27.471714231 +0000 UTC m=+3053.419808908" watchObservedRunningTime="2026-02-16 17:50:27.474602382 +0000 UTC m=+3053.422697069"
Feb 16 17:50:29 crc kubenswrapper[4794]: I0216 17:50:29.078640 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l7zzl"
Feb 16 17:50:29 crc kubenswrapper[4794]: I0216 17:50:29.078954 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l7zzl"
Feb 16 17:50:29 crc kubenswrapper[4794]: E0216 17:50:29.794673 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:50:30 crc kubenswrapper[4794]: I0216 17:50:30.135329 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l7zzl" podUID="ea1dc135-ec78-4c46-8008-7c56fed48b38" containerName="registry-server" probeResult="failure" output=<
Feb 16 17:50:30 crc kubenswrapper[4794]: timeout: failed to connect service ":50051" within 1s
Feb 16 17:50:30 crc kubenswrapper[4794]: >
Feb 16 17:50:36 crc kubenswrapper[4794]: I0216 17:50:36.153283 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2sxnd"
Feb 16 17:50:36 crc kubenswrapper[4794]: I0216 17:50:36.212386 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2sxnd"
Feb 16 17:50:36 crc kubenswrapper[4794]: I0216 17:50:36.398407 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2sxnd"]
Feb 16 17:50:36 crc kubenswrapper[4794]: E0216 17:50:36.794824 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:50:37 crc kubenswrapper[4794]: I0216 17:50:37.549295 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2sxnd" podUID="12df55d1-b4ca-474a-98d5-6a736b66bf6c" containerName="registry-server" containerID="cri-o://2a86c977df85bfdebaf6bdf14bebdde766f2469dbd2ec778aa60bb3668e682cf" gracePeriod=2
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.127350 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2sxnd"
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.220406 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxz68\" (UniqueName: \"kubernetes.io/projected/12df55d1-b4ca-474a-98d5-6a736b66bf6c-kube-api-access-vxz68\") pod \"12df55d1-b4ca-474a-98d5-6a736b66bf6c\" (UID: \"12df55d1-b4ca-474a-98d5-6a736b66bf6c\") "
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.221540 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12df55d1-b4ca-474a-98d5-6a736b66bf6c-utilities\") pod \"12df55d1-b4ca-474a-98d5-6a736b66bf6c\" (UID: \"12df55d1-b4ca-474a-98d5-6a736b66bf6c\") "
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.221732 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12df55d1-b4ca-474a-98d5-6a736b66bf6c-catalog-content\") pod \"12df55d1-b4ca-474a-98d5-6a736b66bf6c\" (UID: \"12df55d1-b4ca-474a-98d5-6a736b66bf6c\") "
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.222514 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12df55d1-b4ca-474a-98d5-6a736b66bf6c-utilities" (OuterVolumeSpecName: "utilities") pod "12df55d1-b4ca-474a-98d5-6a736b66bf6c" (UID: "12df55d1-b4ca-474a-98d5-6a736b66bf6c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.226279 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12df55d1-b4ca-474a-98d5-6a736b66bf6c-kube-api-access-vxz68" (OuterVolumeSpecName: "kube-api-access-vxz68") pod "12df55d1-b4ca-474a-98d5-6a736b66bf6c" (UID: "12df55d1-b4ca-474a-98d5-6a736b66bf6c"). InnerVolumeSpecName "kube-api-access-vxz68". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.273233 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12df55d1-b4ca-474a-98d5-6a736b66bf6c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12df55d1-b4ca-474a-98d5-6a736b66bf6c" (UID: "12df55d1-b4ca-474a-98d5-6a736b66bf6c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.324058 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxz68\" (UniqueName: \"kubernetes.io/projected/12df55d1-b4ca-474a-98d5-6a736b66bf6c-kube-api-access-vxz68\") on node \"crc\" DevicePath \"\""
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.324100 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12df55d1-b4ca-474a-98d5-6a736b66bf6c-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.324110 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12df55d1-b4ca-474a-98d5-6a736b66bf6c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.560913 4794 generic.go:334] "Generic (PLEG): container finished" podID="12df55d1-b4ca-474a-98d5-6a736b66bf6c" containerID="2a86c977df85bfdebaf6bdf14bebdde766f2469dbd2ec778aa60bb3668e682cf" exitCode=0
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.560964 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sxnd" event={"ID":"12df55d1-b4ca-474a-98d5-6a736b66bf6c","Type":"ContainerDied","Data":"2a86c977df85bfdebaf6bdf14bebdde766f2469dbd2ec778aa60bb3668e682cf"}
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.560981 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2sxnd"
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.560997 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sxnd" event={"ID":"12df55d1-b4ca-474a-98d5-6a736b66bf6c","Type":"ContainerDied","Data":"30ca507a4ec1d7f92e4bcfd41d913622bc3352349100a0ea5502d44e92e08378"}
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.561033 4794 scope.go:117] "RemoveContainer" containerID="2a86c977df85bfdebaf6bdf14bebdde766f2469dbd2ec778aa60bb3668e682cf"
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.586282 4794 scope.go:117] "RemoveContainer" containerID="d9ff78678dc401294689401abd16eaf0f66caf4bf08322e9fe86bef946a89021"
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.597065 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2sxnd"]
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.606073 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2sxnd"]
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.615985 4794 scope.go:117] "RemoveContainer" containerID="0ae9b936a08a6c976e037017d0f964dc5dce42e6b52f228c08f04d00f67fb7bd"
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.667139 4794 scope.go:117] "RemoveContainer" containerID="2a86c977df85bfdebaf6bdf14bebdde766f2469dbd2ec778aa60bb3668e682cf"
Feb 16 17:50:38 crc kubenswrapper[4794]: E0216 17:50:38.667801 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a86c977df85bfdebaf6bdf14bebdde766f2469dbd2ec778aa60bb3668e682cf\": container with ID starting with 2a86c977df85bfdebaf6bdf14bebdde766f2469dbd2ec778aa60bb3668e682cf not found: ID does not exist" containerID="2a86c977df85bfdebaf6bdf14bebdde766f2469dbd2ec778aa60bb3668e682cf"
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.667860 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a86c977df85bfdebaf6bdf14bebdde766f2469dbd2ec778aa60bb3668e682cf"} err="failed to get container status \"2a86c977df85bfdebaf6bdf14bebdde766f2469dbd2ec778aa60bb3668e682cf\": rpc error: code = NotFound desc = could not find container \"2a86c977df85bfdebaf6bdf14bebdde766f2469dbd2ec778aa60bb3668e682cf\": container with ID starting with 2a86c977df85bfdebaf6bdf14bebdde766f2469dbd2ec778aa60bb3668e682cf not found: ID does not exist"
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.667897 4794 scope.go:117] "RemoveContainer" containerID="d9ff78678dc401294689401abd16eaf0f66caf4bf08322e9fe86bef946a89021"
Feb 16 17:50:38 crc kubenswrapper[4794]: E0216 17:50:38.668416 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9ff78678dc401294689401abd16eaf0f66caf4bf08322e9fe86bef946a89021\": container with ID starting with d9ff78678dc401294689401abd16eaf0f66caf4bf08322e9fe86bef946a89021 not found: ID does not exist" containerID="d9ff78678dc401294689401abd16eaf0f66caf4bf08322e9fe86bef946a89021"
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.668452 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9ff78678dc401294689401abd16eaf0f66caf4bf08322e9fe86bef946a89021"} err="failed to get container status \"d9ff78678dc401294689401abd16eaf0f66caf4bf08322e9fe86bef946a89021\": rpc error: code = NotFound desc = could not find container \"d9ff78678dc401294689401abd16eaf0f66caf4bf08322e9fe86bef946a89021\": container with ID starting with d9ff78678dc401294689401abd16eaf0f66caf4bf08322e9fe86bef946a89021 not found: ID does not exist"
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.668476 4794 scope.go:117] "RemoveContainer" containerID="0ae9b936a08a6c976e037017d0f964dc5dce42e6b52f228c08f04d00f67fb7bd"
Feb 16 17:50:38 crc kubenswrapper[4794]: E0216 17:50:38.668751 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ae9b936a08a6c976e037017d0f964dc5dce42e6b52f228c08f04d00f67fb7bd\": container with ID starting with 0ae9b936a08a6c976e037017d0f964dc5dce42e6b52f228c08f04d00f67fb7bd not found: ID does not exist" containerID="0ae9b936a08a6c976e037017d0f964dc5dce42e6b52f228c08f04d00f67fb7bd"
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.668774 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ae9b936a08a6c976e037017d0f964dc5dce42e6b52f228c08f04d00f67fb7bd"} err="failed to get container status \"0ae9b936a08a6c976e037017d0f964dc5dce42e6b52f228c08f04d00f67fb7bd\": rpc error: code = NotFound desc = could not find container \"0ae9b936a08a6c976e037017d0f964dc5dce42e6b52f228c08f04d00f67fb7bd\": container with ID starting with 0ae9b936a08a6c976e037017d0f964dc5dce42e6b52f228c08f04d00f67fb7bd not found: ID does not exist"
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.791993 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295"
Feb 16 17:50:38 crc kubenswrapper[4794]: E0216 17:50:38.792292 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:50:38 crc kubenswrapper[4794]: I0216 17:50:38.810652 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12df55d1-b4ca-474a-98d5-6a736b66bf6c" path="/var/lib/kubelet/pods/12df55d1-b4ca-474a-98d5-6a736b66bf6c/volumes"
Feb 16 17:50:40 crc kubenswrapper[4794]: I0216 17:50:40.139948 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l7zzl" podUID="ea1dc135-ec78-4c46-8008-7c56fed48b38" containerName="registry-server" probeResult="failure" output=<
Feb 16 17:50:40 crc kubenswrapper[4794]: timeout: failed to connect service ":50051" within 1s
Feb 16 17:50:40 crc kubenswrapper[4794]: >
Feb 16 17:50:43 crc kubenswrapper[4794]: E0216 17:50:43.794600 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:50:49 crc kubenswrapper[4794]: I0216 17:50:49.674168 4794 generic.go:334] "Generic (PLEG): container finished" podID="8e0581f8-9225-4111-9249-c8b122cb33d3" containerID="408c6bcfe91b6b6e76e5c88d93475ba2bc374517c1146658c1f1370a42fdbdf9" exitCode=2
Feb 16 17:50:49 crc kubenswrapper[4794]: I0216 17:50:49.674256 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf" event={"ID":"8e0581f8-9225-4111-9249-c8b122cb33d3","Type":"ContainerDied","Data":"408c6bcfe91b6b6e76e5c88d93475ba2bc374517c1146658c1f1370a42fdbdf9"}
Feb 16 17:50:50 crc kubenswrapper[4794]: I0216 17:50:50.125869 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l7zzl" podUID="ea1dc135-ec78-4c46-8008-7c56fed48b38" containerName="registry-server" probeResult="failure" output=<
Feb 16 17:50:50 crc kubenswrapper[4794]: timeout: failed to connect service ":50051" within 1s
Feb 16 17:50:50 crc kubenswrapper[4794]: >
Feb 16 17:50:51 crc kubenswrapper[4794]: I0216 17:50:51.241151 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf"
Feb 16 17:50:51 crc kubenswrapper[4794]: I0216 17:50:51.367483 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqlc8\" (UniqueName: \"kubernetes.io/projected/8e0581f8-9225-4111-9249-c8b122cb33d3-kube-api-access-gqlc8\") pod \"8e0581f8-9225-4111-9249-c8b122cb33d3\" (UID: \"8e0581f8-9225-4111-9249-c8b122cb33d3\") "
Feb 16 17:50:51 crc kubenswrapper[4794]: I0216 17:50:51.367893 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e0581f8-9225-4111-9249-c8b122cb33d3-ssh-key-openstack-edpm-ipam\") pod \"8e0581f8-9225-4111-9249-c8b122cb33d3\" (UID: \"8e0581f8-9225-4111-9249-c8b122cb33d3\") "
Feb 16 17:50:51 crc kubenswrapper[4794]: I0216 17:50:51.368039 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e0581f8-9225-4111-9249-c8b122cb33d3-inventory\") pod \"8e0581f8-9225-4111-9249-c8b122cb33d3\" (UID: \"8e0581f8-9225-4111-9249-c8b122cb33d3\") "
Feb 16 17:50:51 crc kubenswrapper[4794]: I0216 17:50:51.373564 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e0581f8-9225-4111-9249-c8b122cb33d3-kube-api-access-gqlc8" (OuterVolumeSpecName: "kube-api-access-gqlc8") pod "8e0581f8-9225-4111-9249-c8b122cb33d3" (UID: "8e0581f8-9225-4111-9249-c8b122cb33d3"). InnerVolumeSpecName "kube-api-access-gqlc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:50:51 crc kubenswrapper[4794]: I0216 17:50:51.399612 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e0581f8-9225-4111-9249-c8b122cb33d3-inventory" (OuterVolumeSpecName: "inventory") pod "8e0581f8-9225-4111-9249-c8b122cb33d3" (UID: "8e0581f8-9225-4111-9249-c8b122cb33d3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:50:51 crc kubenswrapper[4794]: I0216 17:50:51.406119 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e0581f8-9225-4111-9249-c8b122cb33d3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8e0581f8-9225-4111-9249-c8b122cb33d3" (UID: "8e0581f8-9225-4111-9249-c8b122cb33d3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:50:51 crc kubenswrapper[4794]: I0216 17:50:51.471749 4794 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e0581f8-9225-4111-9249-c8b122cb33d3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 16 17:50:51 crc kubenswrapper[4794]: I0216 17:50:51.471987 4794 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e0581f8-9225-4111-9249-c8b122cb33d3-inventory\") on node \"crc\" DevicePath \"\""
Feb 16 17:50:51 crc kubenswrapper[4794]: I0216 17:50:51.472084 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqlc8\" (UniqueName: \"kubernetes.io/projected/8e0581f8-9225-4111-9249-c8b122cb33d3-kube-api-access-gqlc8\") on node \"crc\" DevicePath \"\""
Feb 16 17:50:51 crc kubenswrapper[4794]: I0216 17:50:51.743745 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf" event={"ID":"8e0581f8-9225-4111-9249-c8b122cb33d3","Type":"ContainerDied","Data":"6e09e45c5b268c9f21f2dccfdc181d6104b7b16daa9f81437b54e2427c70c826"} Feb 16 17:50:51 crc kubenswrapper[4794]: I0216 17:50:51.743798 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e09e45c5b268c9f21f2dccfdc181d6104b7b16daa9f81437b54e2427c70c826" Feb 16 17:50:51 crc kubenswrapper[4794]: I0216 17:50:51.743798 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf" Feb 16 17:50:51 crc kubenswrapper[4794]: E0216 17:50:51.793098 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:50:53 crc kubenswrapper[4794]: I0216 17:50:53.792219 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295" Feb 16 17:50:53 crc kubenswrapper[4794]: E0216 17:50:53.792782 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:50:54 crc kubenswrapper[4794]: E0216 17:50:54.801532 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:50:59 crc kubenswrapper[4794]: I0216 17:50:59.176066 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l7zzl" Feb 16 17:50:59 crc kubenswrapper[4794]: I0216 17:50:59.242878 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l7zzl" Feb 16 17:50:59 crc kubenswrapper[4794]: I0216 17:50:59.430457 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l7zzl"] Feb 16 17:51:00 crc kubenswrapper[4794]: I0216 17:51:00.825523 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l7zzl" podUID="ea1dc135-ec78-4c46-8008-7c56fed48b38" containerName="registry-server" containerID="cri-o://45f78f5894ab8e64326da98882ee8fd5dbe7531b7038729e7c55edd0df06354d" gracePeriod=2 Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.353978 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l7zzl" Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.471759 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbml9\" (UniqueName: \"kubernetes.io/projected/ea1dc135-ec78-4c46-8008-7c56fed48b38-kube-api-access-tbml9\") pod \"ea1dc135-ec78-4c46-8008-7c56fed48b38\" (UID: \"ea1dc135-ec78-4c46-8008-7c56fed48b38\") " Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.472101 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea1dc135-ec78-4c46-8008-7c56fed48b38-catalog-content\") pod \"ea1dc135-ec78-4c46-8008-7c56fed48b38\" (UID: \"ea1dc135-ec78-4c46-8008-7c56fed48b38\") " Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.472600 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea1dc135-ec78-4c46-8008-7c56fed48b38-utilities\") pod \"ea1dc135-ec78-4c46-8008-7c56fed48b38\" (UID: \"ea1dc135-ec78-4c46-8008-7c56fed48b38\") " Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.473878 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea1dc135-ec78-4c46-8008-7c56fed48b38-utilities" (OuterVolumeSpecName: "utilities") pod "ea1dc135-ec78-4c46-8008-7c56fed48b38" (UID: "ea1dc135-ec78-4c46-8008-7c56fed48b38"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.487398 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea1dc135-ec78-4c46-8008-7c56fed48b38-kube-api-access-tbml9" (OuterVolumeSpecName: "kube-api-access-tbml9") pod "ea1dc135-ec78-4c46-8008-7c56fed48b38" (UID: "ea1dc135-ec78-4c46-8008-7c56fed48b38"). InnerVolumeSpecName "kube-api-access-tbml9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.577897 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea1dc135-ec78-4c46-8008-7c56fed48b38-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.577937 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbml9\" (UniqueName: \"kubernetes.io/projected/ea1dc135-ec78-4c46-8008-7c56fed48b38-kube-api-access-tbml9\") on node \"crc\" DevicePath \"\"" Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.605194 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea1dc135-ec78-4c46-8008-7c56fed48b38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea1dc135-ec78-4c46-8008-7c56fed48b38" (UID: "ea1dc135-ec78-4c46-8008-7c56fed48b38"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.680095 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea1dc135-ec78-4c46-8008-7c56fed48b38-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.838565 4794 generic.go:334] "Generic (PLEG): container finished" podID="ea1dc135-ec78-4c46-8008-7c56fed48b38" containerID="45f78f5894ab8e64326da98882ee8fd5dbe7531b7038729e7c55edd0df06354d" exitCode=0 Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.838620 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7zzl" event={"ID":"ea1dc135-ec78-4c46-8008-7c56fed48b38","Type":"ContainerDied","Data":"45f78f5894ab8e64326da98882ee8fd5dbe7531b7038729e7c55edd0df06354d"} Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.838651 4794 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-l7zzl" event={"ID":"ea1dc135-ec78-4c46-8008-7c56fed48b38","Type":"ContainerDied","Data":"6c1d958e64a0e4db6bdeb58aabcce382805a489498de595640bab325675cf2f8"} Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.838672 4794 scope.go:117] "RemoveContainer" containerID="45f78f5894ab8e64326da98882ee8fd5dbe7531b7038729e7c55edd0df06354d" Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.838857 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l7zzl" Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.874170 4794 scope.go:117] "RemoveContainer" containerID="1386183a88d09196b27bb6efc7ee796d7c81bfc90bb53d97be4907d3421b9d25" Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.879348 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l7zzl"] Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.888478 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l7zzl"] Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.900104 4794 scope.go:117] "RemoveContainer" containerID="7b282a733ced30878701fb5870e58f1d00f897888c6c7211f36686c0fcd932e7" Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.962690 4794 scope.go:117] "RemoveContainer" containerID="45f78f5894ab8e64326da98882ee8fd5dbe7531b7038729e7c55edd0df06354d" Feb 16 17:51:01 crc kubenswrapper[4794]: E0216 17:51:01.963035 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45f78f5894ab8e64326da98882ee8fd5dbe7531b7038729e7c55edd0df06354d\": container with ID starting with 45f78f5894ab8e64326da98882ee8fd5dbe7531b7038729e7c55edd0df06354d not found: ID does not exist" containerID="45f78f5894ab8e64326da98882ee8fd5dbe7531b7038729e7c55edd0df06354d" Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.963064 4794 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45f78f5894ab8e64326da98882ee8fd5dbe7531b7038729e7c55edd0df06354d"} err="failed to get container status \"45f78f5894ab8e64326da98882ee8fd5dbe7531b7038729e7c55edd0df06354d\": rpc error: code = NotFound desc = could not find container \"45f78f5894ab8e64326da98882ee8fd5dbe7531b7038729e7c55edd0df06354d\": container with ID starting with 45f78f5894ab8e64326da98882ee8fd5dbe7531b7038729e7c55edd0df06354d not found: ID does not exist" Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.963085 4794 scope.go:117] "RemoveContainer" containerID="1386183a88d09196b27bb6efc7ee796d7c81bfc90bb53d97be4907d3421b9d25" Feb 16 17:51:01 crc kubenswrapper[4794]: E0216 17:51:01.963464 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1386183a88d09196b27bb6efc7ee796d7c81bfc90bb53d97be4907d3421b9d25\": container with ID starting with 1386183a88d09196b27bb6efc7ee796d7c81bfc90bb53d97be4907d3421b9d25 not found: ID does not exist" containerID="1386183a88d09196b27bb6efc7ee796d7c81bfc90bb53d97be4907d3421b9d25" Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.963510 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1386183a88d09196b27bb6efc7ee796d7c81bfc90bb53d97be4907d3421b9d25"} err="failed to get container status \"1386183a88d09196b27bb6efc7ee796d7c81bfc90bb53d97be4907d3421b9d25\": rpc error: code = NotFound desc = could not find container \"1386183a88d09196b27bb6efc7ee796d7c81bfc90bb53d97be4907d3421b9d25\": container with ID starting with 1386183a88d09196b27bb6efc7ee796d7c81bfc90bb53d97be4907d3421b9d25 not found: ID does not exist" Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.963527 4794 scope.go:117] "RemoveContainer" containerID="7b282a733ced30878701fb5870e58f1d00f897888c6c7211f36686c0fcd932e7" Feb 16 17:51:01 crc kubenswrapper[4794]: E0216 
17:51:01.963869 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b282a733ced30878701fb5870e58f1d00f897888c6c7211f36686c0fcd932e7\": container with ID starting with 7b282a733ced30878701fb5870e58f1d00f897888c6c7211f36686c0fcd932e7 not found: ID does not exist" containerID="7b282a733ced30878701fb5870e58f1d00f897888c6c7211f36686c0fcd932e7" Feb 16 17:51:01 crc kubenswrapper[4794]: I0216 17:51:01.963914 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b282a733ced30878701fb5870e58f1d00f897888c6c7211f36686c0fcd932e7"} err="failed to get container status \"7b282a733ced30878701fb5870e58f1d00f897888c6c7211f36686c0fcd932e7\": rpc error: code = NotFound desc = could not find container \"7b282a733ced30878701fb5870e58f1d00f897888c6c7211f36686c0fcd932e7\": container with ID starting with 7b282a733ced30878701fb5870e58f1d00f897888c6c7211f36686c0fcd932e7 not found: ID does not exist" Feb 16 17:51:02 crc kubenswrapper[4794]: E0216 17:51:02.792587 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:51:02 crc kubenswrapper[4794]: I0216 17:51:02.807002 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea1dc135-ec78-4c46-8008-7c56fed48b38" path="/var/lib/kubelet/pods/ea1dc135-ec78-4c46-8008-7c56fed48b38/volumes" Feb 16 17:51:06 crc kubenswrapper[4794]: E0216 17:51:06.794467 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:51:08 crc kubenswrapper[4794]: I0216 17:51:08.792062 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295" Feb 16 17:51:08 crc kubenswrapper[4794]: E0216 17:51:08.793108 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.177149 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5bdbz"] Feb 16 17:51:11 crc kubenswrapper[4794]: E0216 17:51:11.178527 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12df55d1-b4ca-474a-98d5-6a736b66bf6c" containerName="extract-content" Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.178547 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="12df55d1-b4ca-474a-98d5-6a736b66bf6c" containerName="extract-content" Feb 16 17:51:11 crc kubenswrapper[4794]: E0216 17:51:11.178573 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12df55d1-b4ca-474a-98d5-6a736b66bf6c" containerName="registry-server" Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.178583 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="12df55d1-b4ca-474a-98d5-6a736b66bf6c" containerName="registry-server" Feb 16 17:51:11 crc kubenswrapper[4794]: E0216 17:51:11.178612 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea1dc135-ec78-4c46-8008-7c56fed48b38" containerName="registry-server" Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.178619 4794 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ea1dc135-ec78-4c46-8008-7c56fed48b38" containerName="registry-server" Feb 16 17:51:11 crc kubenswrapper[4794]: E0216 17:51:11.178631 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12df55d1-b4ca-474a-98d5-6a736b66bf6c" containerName="extract-utilities" Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.178639 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="12df55d1-b4ca-474a-98d5-6a736b66bf6c" containerName="extract-utilities" Feb 16 17:51:11 crc kubenswrapper[4794]: E0216 17:51:11.178662 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e0581f8-9225-4111-9249-c8b122cb33d3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.178671 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e0581f8-9225-4111-9249-c8b122cb33d3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 17:51:11 crc kubenswrapper[4794]: E0216 17:51:11.178687 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea1dc135-ec78-4c46-8008-7c56fed48b38" containerName="extract-utilities" Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.178696 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea1dc135-ec78-4c46-8008-7c56fed48b38" containerName="extract-utilities" Feb 16 17:51:11 crc kubenswrapper[4794]: E0216 17:51:11.178714 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea1dc135-ec78-4c46-8008-7c56fed48b38" containerName="extract-content" Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.178720 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea1dc135-ec78-4c46-8008-7c56fed48b38" containerName="extract-content" Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.179067 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="12df55d1-b4ca-474a-98d5-6a736b66bf6c" containerName="registry-server" Feb 16 
17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.179090 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea1dc135-ec78-4c46-8008-7c56fed48b38" containerName="registry-server" Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.179103 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e0581f8-9225-4111-9249-c8b122cb33d3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.182234 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5bdbz" Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.195369 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5bdbz"] Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.235182 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mshx\" (UniqueName: \"kubernetes.io/projected/79de712b-ffd9-4691-aafa-87ed1a4de57c-kube-api-access-8mshx\") pod \"certified-operators-5bdbz\" (UID: \"79de712b-ffd9-4691-aafa-87ed1a4de57c\") " pod="openshift-marketplace/certified-operators-5bdbz" Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.235464 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79de712b-ffd9-4691-aafa-87ed1a4de57c-utilities\") pod \"certified-operators-5bdbz\" (UID: \"79de712b-ffd9-4691-aafa-87ed1a4de57c\") " pod="openshift-marketplace/certified-operators-5bdbz" Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.235512 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79de712b-ffd9-4691-aafa-87ed1a4de57c-catalog-content\") pod \"certified-operators-5bdbz\" (UID: \"79de712b-ffd9-4691-aafa-87ed1a4de57c\") " 
pod="openshift-marketplace/certified-operators-5bdbz" Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.338170 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mshx\" (UniqueName: \"kubernetes.io/projected/79de712b-ffd9-4691-aafa-87ed1a4de57c-kube-api-access-8mshx\") pod \"certified-operators-5bdbz\" (UID: \"79de712b-ffd9-4691-aafa-87ed1a4de57c\") " pod="openshift-marketplace/certified-operators-5bdbz" Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.338355 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79de712b-ffd9-4691-aafa-87ed1a4de57c-utilities\") pod \"certified-operators-5bdbz\" (UID: \"79de712b-ffd9-4691-aafa-87ed1a4de57c\") " pod="openshift-marketplace/certified-operators-5bdbz" Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.338394 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79de712b-ffd9-4691-aafa-87ed1a4de57c-catalog-content\") pod \"certified-operators-5bdbz\" (UID: \"79de712b-ffd9-4691-aafa-87ed1a4de57c\") " pod="openshift-marketplace/certified-operators-5bdbz" Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.338908 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79de712b-ffd9-4691-aafa-87ed1a4de57c-catalog-content\") pod \"certified-operators-5bdbz\" (UID: \"79de712b-ffd9-4691-aafa-87ed1a4de57c\") " pod="openshift-marketplace/certified-operators-5bdbz" Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.339084 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79de712b-ffd9-4691-aafa-87ed1a4de57c-utilities\") pod \"certified-operators-5bdbz\" (UID: \"79de712b-ffd9-4691-aafa-87ed1a4de57c\") " 
pod="openshift-marketplace/certified-operators-5bdbz" Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.356608 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mshx\" (UniqueName: \"kubernetes.io/projected/79de712b-ffd9-4691-aafa-87ed1a4de57c-kube-api-access-8mshx\") pod \"certified-operators-5bdbz\" (UID: \"79de712b-ffd9-4691-aafa-87ed1a4de57c\") " pod="openshift-marketplace/certified-operators-5bdbz" Feb 16 17:51:11 crc kubenswrapper[4794]: I0216 17:51:11.511757 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5bdbz" Feb 16 17:51:12 crc kubenswrapper[4794]: I0216 17:51:12.111397 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5bdbz"] Feb 16 17:51:13 crc kubenswrapper[4794]: I0216 17:51:13.000663 4794 generic.go:334] "Generic (PLEG): container finished" podID="79de712b-ffd9-4691-aafa-87ed1a4de57c" containerID="a9de450664bffdc954d9d9294b6d738a70ffdb134e7d6884818a869e054fe7d8" exitCode=0 Feb 16 17:51:13 crc kubenswrapper[4794]: I0216 17:51:13.000770 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bdbz" event={"ID":"79de712b-ffd9-4691-aafa-87ed1a4de57c","Type":"ContainerDied","Data":"a9de450664bffdc954d9d9294b6d738a70ffdb134e7d6884818a869e054fe7d8"} Feb 16 17:51:13 crc kubenswrapper[4794]: I0216 17:51:13.001091 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bdbz" event={"ID":"79de712b-ffd9-4691-aafa-87ed1a4de57c","Type":"ContainerStarted","Data":"e022056048f74d544c6fe8e84af5652fef72939ad31fb04ed586469853e200be"} Feb 16 17:51:14 crc kubenswrapper[4794]: I0216 17:51:14.015005 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bdbz" 
event={"ID":"79de712b-ffd9-4691-aafa-87ed1a4de57c","Type":"ContainerStarted","Data":"a11c25c715a91ad12e7ea2c9f10c56eea08b630f903ae3cf17bc16e90958c21c"} Feb 16 17:51:15 crc kubenswrapper[4794]: I0216 17:51:15.029234 4794 generic.go:334] "Generic (PLEG): container finished" podID="79de712b-ffd9-4691-aafa-87ed1a4de57c" containerID="a11c25c715a91ad12e7ea2c9f10c56eea08b630f903ae3cf17bc16e90958c21c" exitCode=0 Feb 16 17:51:15 crc kubenswrapper[4794]: I0216 17:51:15.029284 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bdbz" event={"ID":"79de712b-ffd9-4691-aafa-87ed1a4de57c","Type":"ContainerDied","Data":"a11c25c715a91ad12e7ea2c9f10c56eea08b630f903ae3cf17bc16e90958c21c"} Feb 16 17:51:15 crc kubenswrapper[4794]: E0216 17:51:15.795017 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:51:16 crc kubenswrapper[4794]: I0216 17:51:16.043923 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bdbz" event={"ID":"79de712b-ffd9-4691-aafa-87ed1a4de57c","Type":"ContainerStarted","Data":"1059ed2a4160c22f9c990cc86c679e41fc5f865492326174d77ee449f79cc6cb"} Feb 16 17:51:16 crc kubenswrapper[4794]: I0216 17:51:16.071171 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5bdbz" podStartSLOduration=2.675201185 podStartE2EDuration="5.07114514s" podCreationTimestamp="2026-02-16 17:51:11 +0000 UTC" firstStartedPulling="2026-02-16 17:51:13.002523794 +0000 UTC m=+3098.950618441" lastFinishedPulling="2026-02-16 17:51:15.398467749 +0000 UTC m=+3101.346562396" observedRunningTime="2026-02-16 17:51:16.063455773 +0000 UTC m=+3102.011550430" 
watchObservedRunningTime="2026-02-16 17:51:16.07114514 +0000 UTC m=+3102.019239787" Feb 16 17:51:17 crc kubenswrapper[4794]: E0216 17:51:17.795945 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:51:21 crc kubenswrapper[4794]: I0216 17:51:21.512672 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5bdbz" Feb 16 17:51:21 crc kubenswrapper[4794]: I0216 17:51:21.513540 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5bdbz" Feb 16 17:51:21 crc kubenswrapper[4794]: I0216 17:51:21.600490 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5bdbz" Feb 16 17:51:22 crc kubenswrapper[4794]: I0216 17:51:22.162774 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5bdbz" Feb 16 17:51:22 crc kubenswrapper[4794]: I0216 17:51:22.939883 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5bdbz"] Feb 16 17:51:23 crc kubenswrapper[4794]: I0216 17:51:23.791285 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295" Feb 16 17:51:23 crc kubenswrapper[4794]: E0216 17:51:23.791978 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:51:24 crc kubenswrapper[4794]: I0216 17:51:24.133481 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5bdbz" podUID="79de712b-ffd9-4691-aafa-87ed1a4de57c" containerName="registry-server" containerID="cri-o://1059ed2a4160c22f9c990cc86c679e41fc5f865492326174d77ee449f79cc6cb" gracePeriod=2 Feb 16 17:51:24 crc kubenswrapper[4794]: I0216 17:51:24.888256 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5bdbz" Feb 16 17:51:24 crc kubenswrapper[4794]: I0216 17:51:24.945630 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79de712b-ffd9-4691-aafa-87ed1a4de57c-utilities\") pod \"79de712b-ffd9-4691-aafa-87ed1a4de57c\" (UID: \"79de712b-ffd9-4691-aafa-87ed1a4de57c\") " Feb 16 17:51:24 crc kubenswrapper[4794]: I0216 17:51:24.945881 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79de712b-ffd9-4691-aafa-87ed1a4de57c-catalog-content\") pod \"79de712b-ffd9-4691-aafa-87ed1a4de57c\" (UID: \"79de712b-ffd9-4691-aafa-87ed1a4de57c\") " Feb 16 17:51:24 crc kubenswrapper[4794]: I0216 17:51:24.945929 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mshx\" (UniqueName: \"kubernetes.io/projected/79de712b-ffd9-4691-aafa-87ed1a4de57c-kube-api-access-8mshx\") pod \"79de712b-ffd9-4691-aafa-87ed1a4de57c\" (UID: \"79de712b-ffd9-4691-aafa-87ed1a4de57c\") " Feb 16 17:51:24 crc kubenswrapper[4794]: I0216 17:51:24.948603 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79de712b-ffd9-4691-aafa-87ed1a4de57c-utilities" (OuterVolumeSpecName: "utilities") 
pod "79de712b-ffd9-4691-aafa-87ed1a4de57c" (UID: "79de712b-ffd9-4691-aafa-87ed1a4de57c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:51:24 crc kubenswrapper[4794]: I0216 17:51:24.953296 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79de712b-ffd9-4691-aafa-87ed1a4de57c-kube-api-access-8mshx" (OuterVolumeSpecName: "kube-api-access-8mshx") pod "79de712b-ffd9-4691-aafa-87ed1a4de57c" (UID: "79de712b-ffd9-4691-aafa-87ed1a4de57c"). InnerVolumeSpecName "kube-api-access-8mshx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:51:25 crc kubenswrapper[4794]: I0216 17:51:25.026232 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79de712b-ffd9-4691-aafa-87ed1a4de57c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79de712b-ffd9-4691-aafa-87ed1a4de57c" (UID: "79de712b-ffd9-4691-aafa-87ed1a4de57c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:51:25 crc kubenswrapper[4794]: I0216 17:51:25.048950 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79de712b-ffd9-4691-aafa-87ed1a4de57c-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:51:25 crc kubenswrapper[4794]: I0216 17:51:25.048989 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79de712b-ffd9-4691-aafa-87ed1a4de57c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:51:25 crc kubenswrapper[4794]: I0216 17:51:25.049001 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mshx\" (UniqueName: \"kubernetes.io/projected/79de712b-ffd9-4691-aafa-87ed1a4de57c-kube-api-access-8mshx\") on node \"crc\" DevicePath \"\"" Feb 16 17:51:25 crc kubenswrapper[4794]: I0216 17:51:25.148762 4794 generic.go:334] "Generic (PLEG): container finished" podID="79de712b-ffd9-4691-aafa-87ed1a4de57c" containerID="1059ed2a4160c22f9c990cc86c679e41fc5f865492326174d77ee449f79cc6cb" exitCode=0 Feb 16 17:51:25 crc kubenswrapper[4794]: I0216 17:51:25.148821 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5bdbz" Feb 16 17:51:25 crc kubenswrapper[4794]: I0216 17:51:25.149551 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bdbz" event={"ID":"79de712b-ffd9-4691-aafa-87ed1a4de57c","Type":"ContainerDied","Data":"1059ed2a4160c22f9c990cc86c679e41fc5f865492326174d77ee449f79cc6cb"} Feb 16 17:51:25 crc kubenswrapper[4794]: I0216 17:51:25.149649 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bdbz" event={"ID":"79de712b-ffd9-4691-aafa-87ed1a4de57c","Type":"ContainerDied","Data":"e022056048f74d544c6fe8e84af5652fef72939ad31fb04ed586469853e200be"} Feb 16 17:51:25 crc kubenswrapper[4794]: I0216 17:51:25.149687 4794 scope.go:117] "RemoveContainer" containerID="1059ed2a4160c22f9c990cc86c679e41fc5f865492326174d77ee449f79cc6cb" Feb 16 17:51:25 crc kubenswrapper[4794]: I0216 17:51:25.182783 4794 scope.go:117] "RemoveContainer" containerID="a11c25c715a91ad12e7ea2c9f10c56eea08b630f903ae3cf17bc16e90958c21c" Feb 16 17:51:25 crc kubenswrapper[4794]: I0216 17:51:25.186670 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5bdbz"] Feb 16 17:51:25 crc kubenswrapper[4794]: I0216 17:51:25.200364 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5bdbz"] Feb 16 17:51:25 crc kubenswrapper[4794]: I0216 17:51:25.206066 4794 scope.go:117] "RemoveContainer" containerID="a9de450664bffdc954d9d9294b6d738a70ffdb134e7d6884818a869e054fe7d8" Feb 16 17:51:25 crc kubenswrapper[4794]: I0216 17:51:25.261188 4794 scope.go:117] "RemoveContainer" containerID="1059ed2a4160c22f9c990cc86c679e41fc5f865492326174d77ee449f79cc6cb" Feb 16 17:51:25 crc kubenswrapper[4794]: E0216 17:51:25.261685 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1059ed2a4160c22f9c990cc86c679e41fc5f865492326174d77ee449f79cc6cb\": container with ID starting with 1059ed2a4160c22f9c990cc86c679e41fc5f865492326174d77ee449f79cc6cb not found: ID does not exist" containerID="1059ed2a4160c22f9c990cc86c679e41fc5f865492326174d77ee449f79cc6cb" Feb 16 17:51:25 crc kubenswrapper[4794]: I0216 17:51:25.261717 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1059ed2a4160c22f9c990cc86c679e41fc5f865492326174d77ee449f79cc6cb"} err="failed to get container status \"1059ed2a4160c22f9c990cc86c679e41fc5f865492326174d77ee449f79cc6cb\": rpc error: code = NotFound desc = could not find container \"1059ed2a4160c22f9c990cc86c679e41fc5f865492326174d77ee449f79cc6cb\": container with ID starting with 1059ed2a4160c22f9c990cc86c679e41fc5f865492326174d77ee449f79cc6cb not found: ID does not exist" Feb 16 17:51:25 crc kubenswrapper[4794]: I0216 17:51:25.261736 4794 scope.go:117] "RemoveContainer" containerID="a11c25c715a91ad12e7ea2c9f10c56eea08b630f903ae3cf17bc16e90958c21c" Feb 16 17:51:25 crc kubenswrapper[4794]: E0216 17:51:25.262058 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a11c25c715a91ad12e7ea2c9f10c56eea08b630f903ae3cf17bc16e90958c21c\": container with ID starting with a11c25c715a91ad12e7ea2c9f10c56eea08b630f903ae3cf17bc16e90958c21c not found: ID does not exist" containerID="a11c25c715a91ad12e7ea2c9f10c56eea08b630f903ae3cf17bc16e90958c21c" Feb 16 17:51:25 crc kubenswrapper[4794]: I0216 17:51:25.262101 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a11c25c715a91ad12e7ea2c9f10c56eea08b630f903ae3cf17bc16e90958c21c"} err="failed to get container status \"a11c25c715a91ad12e7ea2c9f10c56eea08b630f903ae3cf17bc16e90958c21c\": rpc error: code = NotFound desc = could not find container \"a11c25c715a91ad12e7ea2c9f10c56eea08b630f903ae3cf17bc16e90958c21c\": container with ID 
starting with a11c25c715a91ad12e7ea2c9f10c56eea08b630f903ae3cf17bc16e90958c21c not found: ID does not exist" Feb 16 17:51:25 crc kubenswrapper[4794]: I0216 17:51:25.262130 4794 scope.go:117] "RemoveContainer" containerID="a9de450664bffdc954d9d9294b6d738a70ffdb134e7d6884818a869e054fe7d8" Feb 16 17:51:25 crc kubenswrapper[4794]: E0216 17:51:25.262352 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9de450664bffdc954d9d9294b6d738a70ffdb134e7d6884818a869e054fe7d8\": container with ID starting with a9de450664bffdc954d9d9294b6d738a70ffdb134e7d6884818a869e054fe7d8 not found: ID does not exist" containerID="a9de450664bffdc954d9d9294b6d738a70ffdb134e7d6884818a869e054fe7d8" Feb 16 17:51:25 crc kubenswrapper[4794]: I0216 17:51:25.262378 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9de450664bffdc954d9d9294b6d738a70ffdb134e7d6884818a869e054fe7d8"} err="failed to get container status \"a9de450664bffdc954d9d9294b6d738a70ffdb134e7d6884818a869e054fe7d8\": rpc error: code = NotFound desc = could not find container \"a9de450664bffdc954d9d9294b6d738a70ffdb134e7d6884818a869e054fe7d8\": container with ID starting with a9de450664bffdc954d9d9294b6d738a70ffdb134e7d6884818a869e054fe7d8 not found: ID does not exist" Feb 16 17:51:26 crc kubenswrapper[4794]: I0216 17:51:26.805331 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79de712b-ffd9-4691-aafa-87ed1a4de57c" path="/var/lib/kubelet/pods/79de712b-ffd9-4691-aafa-87ed1a4de57c/volumes" Feb 16 17:51:28 crc kubenswrapper[4794]: I0216 17:51:28.040489 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd"] Feb 16 17:51:28 crc kubenswrapper[4794]: E0216 17:51:28.041281 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79de712b-ffd9-4691-aafa-87ed1a4de57c" containerName="extract-content" Feb 16 
17:51:28 crc kubenswrapper[4794]: I0216 17:51:28.041295 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="79de712b-ffd9-4691-aafa-87ed1a4de57c" containerName="extract-content" Feb 16 17:51:28 crc kubenswrapper[4794]: E0216 17:51:28.041349 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79de712b-ffd9-4691-aafa-87ed1a4de57c" containerName="registry-server" Feb 16 17:51:28 crc kubenswrapper[4794]: I0216 17:51:28.041356 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="79de712b-ffd9-4691-aafa-87ed1a4de57c" containerName="registry-server" Feb 16 17:51:28 crc kubenswrapper[4794]: E0216 17:51:28.041369 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79de712b-ffd9-4691-aafa-87ed1a4de57c" containerName="extract-utilities" Feb 16 17:51:28 crc kubenswrapper[4794]: I0216 17:51:28.041375 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="79de712b-ffd9-4691-aafa-87ed1a4de57c" containerName="extract-utilities" Feb 16 17:51:28 crc kubenswrapper[4794]: I0216 17:51:28.041586 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="79de712b-ffd9-4691-aafa-87ed1a4de57c" containerName="registry-server" Feb 16 17:51:28 crc kubenswrapper[4794]: I0216 17:51:28.042458 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd" Feb 16 17:51:28 crc kubenswrapper[4794]: I0216 17:51:28.046011 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 17:51:28 crc kubenswrapper[4794]: I0216 17:51:28.046028 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 17:51:28 crc kubenswrapper[4794]: I0216 17:51:28.046161 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 17:51:28 crc kubenswrapper[4794]: I0216 17:51:28.046436 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kshzw" Feb 16 17:51:28 crc kubenswrapper[4794]: I0216 17:51:28.052903 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd"] Feb 16 17:51:28 crc kubenswrapper[4794]: I0216 17:51:28.121092 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1acb8748-d3eb-4984-91a5-2f2b43926abf-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd\" (UID: \"1acb8748-d3eb-4984-91a5-2f2b43926abf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd" Feb 16 17:51:28 crc kubenswrapper[4794]: I0216 17:51:28.121337 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qx8d\" (UniqueName: \"kubernetes.io/projected/1acb8748-d3eb-4984-91a5-2f2b43926abf-kube-api-access-7qx8d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd\" (UID: \"1acb8748-d3eb-4984-91a5-2f2b43926abf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd" Feb 16 17:51:28 crc kubenswrapper[4794]: I0216 
17:51:28.121498 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1acb8748-d3eb-4984-91a5-2f2b43926abf-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd\" (UID: \"1acb8748-d3eb-4984-91a5-2f2b43926abf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd" Feb 16 17:51:28 crc kubenswrapper[4794]: I0216 17:51:28.223175 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qx8d\" (UniqueName: \"kubernetes.io/projected/1acb8748-d3eb-4984-91a5-2f2b43926abf-kube-api-access-7qx8d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd\" (UID: \"1acb8748-d3eb-4984-91a5-2f2b43926abf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd" Feb 16 17:51:28 crc kubenswrapper[4794]: I0216 17:51:28.223353 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1acb8748-d3eb-4984-91a5-2f2b43926abf-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd\" (UID: \"1acb8748-d3eb-4984-91a5-2f2b43926abf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd" Feb 16 17:51:28 crc kubenswrapper[4794]: I0216 17:51:28.223420 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1acb8748-d3eb-4984-91a5-2f2b43926abf-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd\" (UID: \"1acb8748-d3eb-4984-91a5-2f2b43926abf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd" Feb 16 17:51:28 crc kubenswrapper[4794]: I0216 17:51:28.229019 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/1acb8748-d3eb-4984-91a5-2f2b43926abf-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd\" (UID: \"1acb8748-d3eb-4984-91a5-2f2b43926abf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd" Feb 16 17:51:28 crc kubenswrapper[4794]: I0216 17:51:28.229381 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1acb8748-d3eb-4984-91a5-2f2b43926abf-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd\" (UID: \"1acb8748-d3eb-4984-91a5-2f2b43926abf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd" Feb 16 17:51:28 crc kubenswrapper[4794]: I0216 17:51:28.246173 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qx8d\" (UniqueName: \"kubernetes.io/projected/1acb8748-d3eb-4984-91a5-2f2b43926abf-kube-api-access-7qx8d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd\" (UID: \"1acb8748-d3eb-4984-91a5-2f2b43926abf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd" Feb 16 17:51:28 crc kubenswrapper[4794]: I0216 17:51:28.366225 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd" Feb 16 17:51:28 crc kubenswrapper[4794]: I0216 17:51:28.929641 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd"] Feb 16 17:51:28 crc kubenswrapper[4794]: W0216 17:51:28.929753 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1acb8748_d3eb_4984_91a5_2f2b43926abf.slice/crio-1c2449aafb469e40c4e8c71892b9cf210c0d07e70e814f9ea5ea67f7c0f26181 WatchSource:0}: Error finding container 1c2449aafb469e40c4e8c71892b9cf210c0d07e70e814f9ea5ea67f7c0f26181: Status 404 returned error can't find the container with id 1c2449aafb469e40c4e8c71892b9cf210c0d07e70e814f9ea5ea67f7c0f26181 Feb 16 17:51:29 crc kubenswrapper[4794]: I0216 17:51:29.189641 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd" event={"ID":"1acb8748-d3eb-4984-91a5-2f2b43926abf","Type":"ContainerStarted","Data":"1c2449aafb469e40c4e8c71892b9cf210c0d07e70e814f9ea5ea67f7c0f26181"} Feb 16 17:51:29 crc kubenswrapper[4794]: E0216 17:51:29.792768 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:51:29 crc kubenswrapper[4794]: E0216 17:51:29.793448 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:51:30 crc 
kubenswrapper[4794]: I0216 17:51:30.202992 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd" event={"ID":"1acb8748-d3eb-4984-91a5-2f2b43926abf","Type":"ContainerStarted","Data":"59acb0ba94ac659c4a5d4ad963a84c3003bc03597835d5c117bad188cac2e8a0"} Feb 16 17:51:36 crc kubenswrapper[4794]: I0216 17:51:36.791127 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295" Feb 16 17:51:36 crc kubenswrapper[4794]: E0216 17:51:36.791935 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:51:43 crc kubenswrapper[4794]: E0216 17:51:43.793344 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:51:44 crc kubenswrapper[4794]: E0216 17:51:44.793272 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:51:47 crc kubenswrapper[4794]: I0216 17:51:47.791788 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295" Feb 16 17:51:47 crc 
kubenswrapper[4794]: E0216 17:51:47.792561 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:51:57 crc kubenswrapper[4794]: E0216 17:51:57.793691 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:51:58 crc kubenswrapper[4794]: E0216 17:51:58.792439 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:52:02 crc kubenswrapper[4794]: I0216 17:52:02.805078 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295" Feb 16 17:52:02 crc kubenswrapper[4794]: E0216 17:52:02.816255 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 17:52:08 crc kubenswrapper[4794]: I0216 
17:52:08.794243 4794 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:52:08 crc kubenswrapper[4794]: E0216 17:52:08.914144 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 17:52:08 crc kubenswrapper[4794]: E0216 17:52:08.914275 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 17:52:08 crc kubenswrapper[4794]: E0216 17:52:08.914636 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2h5l2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7gcsf_openstack(c695f880-15cb-45b1-8545-60d8437ec631): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:52:08 crc kubenswrapper[4794]: E0216 17:52:08.916081 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:52:13 crc kubenswrapper[4794]: E0216 17:52:13.917767 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 17:52:13 crc kubenswrapper[4794]: E0216 17:52:13.918107 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 17:52:13 crc kubenswrapper[4794]: E0216 17:52:13.918291 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh58dh6ch557h84h55ch564h5bh58fh5c8h5d4h584h669h667h569h59hd5hdbh9dh67ch5f9h59fh597h96h664h687h66dhfch5ddh5b7h88h59cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:
tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9v9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8981f528-1f74-4d56-a93c-22860725b490): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError"
Feb 16 17:52:13 crc kubenswrapper[4794]: E0216 17:52:13.919835 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:52:15 crc kubenswrapper[4794]: I0216 17:52:15.794134 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295"
Feb 16 17:52:15 crc kubenswrapper[4794]: E0216 17:52:15.795053 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:52:23 crc kubenswrapper[4794]: E0216 17:52:23.793569 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:52:23 crc kubenswrapper[4794]: I0216 17:52:23.817178 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd" podStartSLOduration=55.359955888 podStartE2EDuration="55.817152893s" podCreationTimestamp="2026-02-16 17:51:28 +0000 UTC" firstStartedPulling="2026-02-16 17:51:28.931831849 +0000 UTC m=+3114.879926506" lastFinishedPulling="2026-02-16 17:51:29.389028864 +0000 UTC m=+3115.337123511" observedRunningTime="2026-02-16 17:51:30.220808992 +0000 UTC m=+3116.168903639" watchObservedRunningTime="2026-02-16 17:52:23.817152893 +0000 UTC m=+3169.765247550"
Feb 16 17:52:24 crc kubenswrapper[4794]: E0216 17:52:24.807454 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:52:30 crc kubenswrapper[4794]: I0216 17:52:30.792335 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295"
Feb 16 17:52:31 crc kubenswrapper[4794]: I0216 17:52:31.936985 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"4ab17da36de9edf518efb441493a1cb12486c35845421f7700131462301c3174"}
Feb 16 17:52:34 crc kubenswrapper[4794]: E0216 17:52:34.802051 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:52:35 crc kubenswrapper[4794]: E0216 17:52:35.794161 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:52:46 crc kubenswrapper[4794]: E0216 17:52:46.794590 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:52:49 crc kubenswrapper[4794]: E0216 17:52:49.794736 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:52:57 crc kubenswrapper[4794]: E0216 17:52:57.794292 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:53:03 crc kubenswrapper[4794]: E0216 17:53:03.793201 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:53:08 crc kubenswrapper[4794]: E0216 17:53:08.794511 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:53:15 crc kubenswrapper[4794]: E0216 17:53:15.792918 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:53:22 crc kubenswrapper[4794]: E0216 17:53:22.794915 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:53:27 crc kubenswrapper[4794]: E0216 17:53:27.794584 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:53:36 crc kubenswrapper[4794]: E0216 17:53:36.794035 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:53:40 crc kubenswrapper[4794]: E0216 17:53:40.793387 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:53:51 crc kubenswrapper[4794]: E0216 17:53:51.794579 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:53:52 crc kubenswrapper[4794]: E0216 17:53:52.793866 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:54:05 crc kubenswrapper[4794]: E0216 17:54:05.794168 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:54:05 crc kubenswrapper[4794]: E0216 17:54:05.794375 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:54:20 crc kubenswrapper[4794]: E0216 17:54:20.797752 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:54:20 crc kubenswrapper[4794]: E0216 17:54:20.798479 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:54:31 crc kubenswrapper[4794]: E0216 17:54:31.793713 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:54:34 crc kubenswrapper[4794]: E0216 17:54:34.800404 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:54:42 crc kubenswrapper[4794]: E0216 17:54:42.795589 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:54:49 crc kubenswrapper[4794]: E0216 17:54:49.794887 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:54:50 crc kubenswrapper[4794]: I0216 17:54:50.140812 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 17:54:50 crc kubenswrapper[4794]: I0216 17:54:50.140900 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 17:54:57 crc kubenswrapper[4794]: E0216 17:54:57.794022 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:55:02 crc kubenswrapper[4794]: E0216 17:55:02.792868 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:55:11 crc kubenswrapper[4794]: E0216 17:55:11.798449 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:55:16 crc kubenswrapper[4794]: E0216 17:55:16.795285 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:55:20 crc kubenswrapper[4794]: I0216 17:55:20.141222 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 17:55:20 crc kubenswrapper[4794]: I0216 17:55:20.141612 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 17:55:25 crc kubenswrapper[4794]: E0216 17:55:25.794865 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:55:30 crc kubenswrapper[4794]: E0216 17:55:30.793428 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:55:37 crc kubenswrapper[4794]: E0216 17:55:37.794085 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:55:44 crc kubenswrapper[4794]: E0216 17:55:44.809940 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:55:50 crc kubenswrapper[4794]: I0216 17:55:50.140657 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 17:55:50 crc kubenswrapper[4794]: I0216 17:55:50.141281 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 17:55:50 crc kubenswrapper[4794]: I0216 17:55:50.141367 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf"
Feb 16 17:55:50 crc kubenswrapper[4794]: I0216 17:55:50.142426 4794 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ab17da36de9edf518efb441493a1cb12486c35845421f7700131462301c3174"} pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 17:55:50 crc kubenswrapper[4794]: I0216 17:55:50.142487 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" containerID="cri-o://4ab17da36de9edf518efb441493a1cb12486c35845421f7700131462301c3174" gracePeriod=600
Feb 16 17:55:51 crc kubenswrapper[4794]: I0216 17:55:51.186648 4794 generic.go:334] "Generic (PLEG): container finished" podID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerID="4ab17da36de9edf518efb441493a1cb12486c35845421f7700131462301c3174" exitCode=0
Feb 16 17:55:51 crc kubenswrapper[4794]: I0216 17:55:51.186743 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerDied","Data":"4ab17da36de9edf518efb441493a1cb12486c35845421f7700131462301c3174"}
Feb 16 17:55:51 crc kubenswrapper[4794]: I0216 17:55:51.187189 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121"}
Feb 16 17:55:51 crc kubenswrapper[4794]: I0216 17:55:51.187212 4794 scope.go:117] "RemoveContainer" containerID="edf1be4d50aed76dbd4d7c50b58efe821ce0164013d44ef706bca2dbdf85d295"
Feb 16 17:55:51 crc kubenswrapper[4794]: E0216 17:55:51.794282 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:55:55 crc kubenswrapper[4794]: E0216 17:55:55.794798 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:56:05 crc kubenswrapper[4794]: E0216 17:56:05.795592 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:56:06 crc kubenswrapper[4794]: E0216 17:56:06.796855 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:56:19 crc kubenswrapper[4794]: E0216 17:56:19.794629 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:56:20 crc kubenswrapper[4794]: E0216 17:56:20.793095 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:56:24 crc kubenswrapper[4794]: I0216 17:56:24.187428 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7tmsp"]
Feb 16 17:56:24 crc kubenswrapper[4794]: I0216 17:56:24.192123 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7tmsp"
Feb 16 17:56:24 crc kubenswrapper[4794]: I0216 17:56:24.193592 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f226a3a9-f3ac-40d7-8e48-e07fd5dff619-utilities\") pod \"redhat-marketplace-7tmsp\" (UID: \"f226a3a9-f3ac-40d7-8e48-e07fd5dff619\") " pod="openshift-marketplace/redhat-marketplace-7tmsp"
Feb 16 17:56:24 crc kubenswrapper[4794]: I0216 17:56:24.193672 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f226a3a9-f3ac-40d7-8e48-e07fd5dff619-catalog-content\") pod \"redhat-marketplace-7tmsp\" (UID: \"f226a3a9-f3ac-40d7-8e48-e07fd5dff619\") " pod="openshift-marketplace/redhat-marketplace-7tmsp"
Feb 16 17:56:24 crc kubenswrapper[4794]: I0216 17:56:24.194189 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pmpt\" (UniqueName: \"kubernetes.io/projected/f226a3a9-f3ac-40d7-8e48-e07fd5dff619-kube-api-access-5pmpt\") pod \"redhat-marketplace-7tmsp\" (UID: \"f226a3a9-f3ac-40d7-8e48-e07fd5dff619\") " pod="openshift-marketplace/redhat-marketplace-7tmsp"
Feb 16 17:56:24 crc kubenswrapper[4794]: I0216 17:56:24.217249 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7tmsp"]
Feb 16 17:56:24 crc kubenswrapper[4794]: I0216 17:56:24.296860 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pmpt\" (UniqueName: \"kubernetes.io/projected/f226a3a9-f3ac-40d7-8e48-e07fd5dff619-kube-api-access-5pmpt\") pod \"redhat-marketplace-7tmsp\" (UID: \"f226a3a9-f3ac-40d7-8e48-e07fd5dff619\") " pod="openshift-marketplace/redhat-marketplace-7tmsp"
Feb 16 17:56:24 crc kubenswrapper[4794]: I0216 17:56:24.296983 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f226a3a9-f3ac-40d7-8e48-e07fd5dff619-utilities\") pod \"redhat-marketplace-7tmsp\" (UID: \"f226a3a9-f3ac-40d7-8e48-e07fd5dff619\") " pod="openshift-marketplace/redhat-marketplace-7tmsp"
Feb 16 17:56:24 crc kubenswrapper[4794]: I0216 17:56:24.297017 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f226a3a9-f3ac-40d7-8e48-e07fd5dff619-catalog-content\") pod \"redhat-marketplace-7tmsp\" (UID: \"f226a3a9-f3ac-40d7-8e48-e07fd5dff619\") " pod="openshift-marketplace/redhat-marketplace-7tmsp"
Feb 16 17:56:24 crc kubenswrapper[4794]: I0216 17:56:24.297607 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f226a3a9-f3ac-40d7-8e48-e07fd5dff619-utilities\") pod \"redhat-marketplace-7tmsp\" (UID: \"f226a3a9-f3ac-40d7-8e48-e07fd5dff619\") " pod="openshift-marketplace/redhat-marketplace-7tmsp"
Feb 16 17:56:24 crc kubenswrapper[4794]: I0216 17:56:24.297649 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f226a3a9-f3ac-40d7-8e48-e07fd5dff619-catalog-content\") pod \"redhat-marketplace-7tmsp\" (UID: \"f226a3a9-f3ac-40d7-8e48-e07fd5dff619\") " pod="openshift-marketplace/redhat-marketplace-7tmsp"
Feb 16 17:56:24 crc kubenswrapper[4794]: I0216 17:56:24.314336 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pmpt\" (UniqueName: \"kubernetes.io/projected/f226a3a9-f3ac-40d7-8e48-e07fd5dff619-kube-api-access-5pmpt\") pod \"redhat-marketplace-7tmsp\" (UID: \"f226a3a9-f3ac-40d7-8e48-e07fd5dff619\") " pod="openshift-marketplace/redhat-marketplace-7tmsp"
Feb 16 17:56:24 crc kubenswrapper[4794]: I0216 17:56:24.536594 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7tmsp"
Feb 16 17:56:25 crc kubenswrapper[4794]: I0216 17:56:25.063975 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7tmsp"]
Feb 16 17:56:25 crc kubenswrapper[4794]: I0216 17:56:25.560209 4794 generic.go:334] "Generic (PLEG): container finished" podID="f226a3a9-f3ac-40d7-8e48-e07fd5dff619" containerID="645bf7eb551f040b10300d8d2c5387bfe87446a19a515653c96df8e1bf313c65" exitCode=0
Feb 16 17:56:25 crc kubenswrapper[4794]: I0216 17:56:25.560362 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7tmsp" event={"ID":"f226a3a9-f3ac-40d7-8e48-e07fd5dff619","Type":"ContainerDied","Data":"645bf7eb551f040b10300d8d2c5387bfe87446a19a515653c96df8e1bf313c65"}
Feb 16 17:56:25 crc kubenswrapper[4794]: I0216 17:56:25.560729 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7tmsp" event={"ID":"f226a3a9-f3ac-40d7-8e48-e07fd5dff619","Type":"ContainerStarted","Data":"11b58a89703b9a49de45a9dfaeeaa66402a7d1f5bea2a32609afc797f94a6e95"}
Feb 16 17:56:26 crc kubenswrapper[4794]: I0216 17:56:26.573451 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7tmsp" event={"ID":"f226a3a9-f3ac-40d7-8e48-e07fd5dff619","Type":"ContainerStarted","Data":"d1995a4914df703b8d8550e3e3aba4f21938173e730b2ace78e6fa94c62a8edc"}
Feb 16 17:56:27 crc kubenswrapper[4794]: I0216 17:56:27.584301 4794 generic.go:334] "Generic (PLEG): container finished" podID="f226a3a9-f3ac-40d7-8e48-e07fd5dff619" containerID="d1995a4914df703b8d8550e3e3aba4f21938173e730b2ace78e6fa94c62a8edc" exitCode=0
Feb 16 17:56:27 crc kubenswrapper[4794]: I0216 17:56:27.584489 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7tmsp" event={"ID":"f226a3a9-f3ac-40d7-8e48-e07fd5dff619","Type":"ContainerDied","Data":"d1995a4914df703b8d8550e3e3aba4f21938173e730b2ace78e6fa94c62a8edc"}
Feb 16 17:56:29 crc kubenswrapper[4794]: I0216 17:56:29.609807 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7tmsp" event={"ID":"f226a3a9-f3ac-40d7-8e48-e07fd5dff619","Type":"ContainerStarted","Data":"54e1ad634a848eea6b947301dd9e14190bd9324c40339fa44e587bf8f444f3e0"}
Feb 16 17:56:29 crc kubenswrapper[4794]: I0216 17:56:29.644229 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7tmsp" podStartSLOduration=2.287748677 podStartE2EDuration="5.644211472s" podCreationTimestamp="2026-02-16 17:56:24 +0000 UTC" firstStartedPulling="2026-02-16 17:56:25.562737029 +0000 UTC m=+3411.510831686" lastFinishedPulling="2026-02-16 17:56:28.919199834 +0000 UTC m=+3414.867294481" observedRunningTime="2026-02-16 17:56:29.635038313 +0000 UTC m=+3415.583132980" watchObservedRunningTime="2026-02-16 17:56:29.644211472 +0000 UTC m=+3415.592306119"
Feb 16 17:56:32 crc kubenswrapper[4794]: E0216 17:56:32.794138 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:56:33 crc kubenswrapper[4794]: E0216 17:56:33.792636 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:56:34 crc kubenswrapper[4794]: I0216 17:56:34.536850 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7tmsp"
Feb 16 17:56:34 crc kubenswrapper[4794]: I0216 17:56:34.537200 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7tmsp"
Feb 16 17:56:34 crc kubenswrapper[4794]: I0216 17:56:34.588050 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7tmsp"
Feb 16 17:56:34 crc kubenswrapper[4794]: I0216 17:56:34.703522 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7tmsp"
Feb 16 17:56:34 crc kubenswrapper[4794]: I0216 17:56:34.830240 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7tmsp"]
Feb 16 17:56:36 crc kubenswrapper[4794]: I0216 17:56:36.680603 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7tmsp" podUID="f226a3a9-f3ac-40d7-8e48-e07fd5dff619" containerName="registry-server" containerID="cri-o://54e1ad634a848eea6b947301dd9e14190bd9324c40339fa44e587bf8f444f3e0" gracePeriod=2
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.248759 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7tmsp"
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.424854 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f226a3a9-f3ac-40d7-8e48-e07fd5dff619-catalog-content\") pod \"f226a3a9-f3ac-40d7-8e48-e07fd5dff619\" (UID: \"f226a3a9-f3ac-40d7-8e48-e07fd5dff619\") "
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.425091 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f226a3a9-f3ac-40d7-8e48-e07fd5dff619-utilities\") pod \"f226a3a9-f3ac-40d7-8e48-e07fd5dff619\" (UID: \"f226a3a9-f3ac-40d7-8e48-e07fd5dff619\") "
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.425289 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pmpt\" (UniqueName: \"kubernetes.io/projected/f226a3a9-f3ac-40d7-8e48-e07fd5dff619-kube-api-access-5pmpt\") pod \"f226a3a9-f3ac-40d7-8e48-e07fd5dff619\" (UID: \"f226a3a9-f3ac-40d7-8e48-e07fd5dff619\") "
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.425910 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f226a3a9-f3ac-40d7-8e48-e07fd5dff619-utilities" (OuterVolumeSpecName: "utilities") pod "f226a3a9-f3ac-40d7-8e48-e07fd5dff619" (UID: "f226a3a9-f3ac-40d7-8e48-e07fd5dff619"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.426560 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f226a3a9-f3ac-40d7-8e48-e07fd5dff619-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.437603 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f226a3a9-f3ac-40d7-8e48-e07fd5dff619-kube-api-access-5pmpt" (OuterVolumeSpecName: "kube-api-access-5pmpt") pod "f226a3a9-f3ac-40d7-8e48-e07fd5dff619" (UID: "f226a3a9-f3ac-40d7-8e48-e07fd5dff619"). InnerVolumeSpecName "kube-api-access-5pmpt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.455462 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f226a3a9-f3ac-40d7-8e48-e07fd5dff619-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f226a3a9-f3ac-40d7-8e48-e07fd5dff619" (UID: "f226a3a9-f3ac-40d7-8e48-e07fd5dff619"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.528190 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f226a3a9-f3ac-40d7-8e48-e07fd5dff619-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.528224 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pmpt\" (UniqueName: \"kubernetes.io/projected/f226a3a9-f3ac-40d7-8e48-e07fd5dff619-kube-api-access-5pmpt\") on node \"crc\" DevicePath \"\""
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.689415 4794 generic.go:334] "Generic (PLEG): container finished" podID="f226a3a9-f3ac-40d7-8e48-e07fd5dff619" containerID="54e1ad634a848eea6b947301dd9e14190bd9324c40339fa44e587bf8f444f3e0" exitCode=0
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.689460 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7tmsp" event={"ID":"f226a3a9-f3ac-40d7-8e48-e07fd5dff619","Type":"ContainerDied","Data":"54e1ad634a848eea6b947301dd9e14190bd9324c40339fa44e587bf8f444f3e0"}
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.689666 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7tmsp" event={"ID":"f226a3a9-f3ac-40d7-8e48-e07fd5dff619","Type":"ContainerDied","Data":"11b58a89703b9a49de45a9dfaeeaa66402a7d1f5bea2a32609afc797f94a6e95"}
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.689689 4794 scope.go:117] "RemoveContainer" containerID="54e1ad634a848eea6b947301dd9e14190bd9324c40339fa44e587bf8f444f3e0"
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.689531 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7tmsp"
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.719814 4794 scope.go:117] "RemoveContainer" containerID="d1995a4914df703b8d8550e3e3aba4f21938173e730b2ace78e6fa94c62a8edc"
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.738523 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7tmsp"]
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.745910 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7tmsp"]
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.760346 4794 scope.go:117] "RemoveContainer" containerID="645bf7eb551f040b10300d8d2c5387bfe87446a19a515653c96df8e1bf313c65"
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.800085 4794 scope.go:117] "RemoveContainer" containerID="54e1ad634a848eea6b947301dd9e14190bd9324c40339fa44e587bf8f444f3e0"
Feb 16 17:56:37 crc kubenswrapper[4794]: E0216 17:56:37.800821 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54e1ad634a848eea6b947301dd9e14190bd9324c40339fa44e587bf8f444f3e0\": container with ID starting with 54e1ad634a848eea6b947301dd9e14190bd9324c40339fa44e587bf8f444f3e0 not found: ID does not exist" containerID="54e1ad634a848eea6b947301dd9e14190bd9324c40339fa44e587bf8f444f3e0"
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.800908 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54e1ad634a848eea6b947301dd9e14190bd9324c40339fa44e587bf8f444f3e0"} err="failed to get container status \"54e1ad634a848eea6b947301dd9e14190bd9324c40339fa44e587bf8f444f3e0\": rpc error: code = NotFound desc = could not find container \"54e1ad634a848eea6b947301dd9e14190bd9324c40339fa44e587bf8f444f3e0\": container with ID starting with 54e1ad634a848eea6b947301dd9e14190bd9324c40339fa44e587bf8f444f3e0 not found: ID does not exist"
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.800993 4794 scope.go:117] "RemoveContainer" containerID="d1995a4914df703b8d8550e3e3aba4f21938173e730b2ace78e6fa94c62a8edc"
Feb 16 17:56:37 crc kubenswrapper[4794]: E0216 17:56:37.801267 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1995a4914df703b8d8550e3e3aba4f21938173e730b2ace78e6fa94c62a8edc\": container with ID starting with d1995a4914df703b8d8550e3e3aba4f21938173e730b2ace78e6fa94c62a8edc not found: ID does not exist" containerID="d1995a4914df703b8d8550e3e3aba4f21938173e730b2ace78e6fa94c62a8edc"
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.801388 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1995a4914df703b8d8550e3e3aba4f21938173e730b2ace78e6fa94c62a8edc"} err="failed to get container status \"d1995a4914df703b8d8550e3e3aba4f21938173e730b2ace78e6fa94c62a8edc\": rpc error: code = NotFound desc = could not find container \"d1995a4914df703b8d8550e3e3aba4f21938173e730b2ace78e6fa94c62a8edc\": container with ID starting with d1995a4914df703b8d8550e3e3aba4f21938173e730b2ace78e6fa94c62a8edc not found: ID does not exist"
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.801464 4794 scope.go:117] "RemoveContainer" containerID="645bf7eb551f040b10300d8d2c5387bfe87446a19a515653c96df8e1bf313c65"
Feb 16 17:56:37 crc kubenswrapper[4794]: E0216 17:56:37.801794 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"645bf7eb551f040b10300d8d2c5387bfe87446a19a515653c96df8e1bf313c65\": container with ID starting with 645bf7eb551f040b10300d8d2c5387bfe87446a19a515653c96df8e1bf313c65 not found: ID does not exist" containerID="645bf7eb551f040b10300d8d2c5387bfe87446a19a515653c96df8e1bf313c65"
Feb 16 17:56:37 crc kubenswrapper[4794]: I0216 17:56:37.801880 4794 pod_container_deletor.go:53]
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"645bf7eb551f040b10300d8d2c5387bfe87446a19a515653c96df8e1bf313c65"} err="failed to get container status \"645bf7eb551f040b10300d8d2c5387bfe87446a19a515653c96df8e1bf313c65\": rpc error: code = NotFound desc = could not find container \"645bf7eb551f040b10300d8d2c5387bfe87446a19a515653c96df8e1bf313c65\": container with ID starting with 645bf7eb551f040b10300d8d2c5387bfe87446a19a515653c96df8e1bf313c65 not found: ID does not exist" Feb 16 17:56:38 crc kubenswrapper[4794]: I0216 17:56:38.805582 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f226a3a9-f3ac-40d7-8e48-e07fd5dff619" path="/var/lib/kubelet/pods/f226a3a9-f3ac-40d7-8e48-e07fd5dff619/volumes" Feb 16 17:56:44 crc kubenswrapper[4794]: E0216 17:56:44.802091 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:56:44 crc kubenswrapper[4794]: E0216 17:56:44.803607 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:56:55 crc kubenswrapper[4794]: E0216 17:56:55.794410 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:56:58 crc kubenswrapper[4794]: E0216 
17:56:58.794057 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:57:10 crc kubenswrapper[4794]: I0216 17:57:10.793073 4794 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:57:10 crc kubenswrapper[4794]: E0216 17:57:10.793083 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:57:10 crc kubenswrapper[4794]: E0216 17:57:10.917566 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 17:57:10 crc kubenswrapper[4794]: E0216 17:57:10.917629 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 17:57:10 crc kubenswrapper[4794]: E0216 17:57:10.917750 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2h5l2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7gcsf_openstack(c695f880-15cb-45b1-8545-60d8437ec631): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:57:10 crc kubenswrapper[4794]: E0216 17:57:10.918945 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:57:22 crc kubenswrapper[4794]: E0216 17:57:22.919936 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 17:57:22 crc kubenswrapper[4794]: E0216 17:57:22.920369 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 17:57:22 crc kubenswrapper[4794]: E0216 17:57:22.920492 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh58dh6ch557h84h55ch564h5bh58fh5c8h5d4h584h669h667h569h59hd5hdbh9dh67ch5f9h59fh597h96h664h687h66dhfch5ddh5b7h88h59cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:
tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9v9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8981f528-1f74-4d56-a93c-22860725b490): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 17:57:22 crc kubenswrapper[4794]: E0216 17:57:22.921629 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:57:24 crc kubenswrapper[4794]: E0216 17:57:24.801547 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:57:35 crc kubenswrapper[4794]: E0216 17:57:35.794493 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:57:38 crc kubenswrapper[4794]: E0216 17:57:38.793461 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:57:42 crc kubenswrapper[4794]: I0216 17:57:42.437152 4794 generic.go:334] "Generic (PLEG): container finished" 
podID="1acb8748-d3eb-4984-91a5-2f2b43926abf" containerID="59acb0ba94ac659c4a5d4ad963a84c3003bc03597835d5c117bad188cac2e8a0" exitCode=2 Feb 16 17:57:42 crc kubenswrapper[4794]: I0216 17:57:42.437235 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd" event={"ID":"1acb8748-d3eb-4984-91a5-2f2b43926abf","Type":"ContainerDied","Data":"59acb0ba94ac659c4a5d4ad963a84c3003bc03597835d5c117bad188cac2e8a0"} Feb 16 17:57:44 crc kubenswrapper[4794]: I0216 17:57:44.415676 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd" Feb 16 17:57:44 crc kubenswrapper[4794]: I0216 17:57:44.458542 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd" event={"ID":"1acb8748-d3eb-4984-91a5-2f2b43926abf","Type":"ContainerDied","Data":"1c2449aafb469e40c4e8c71892b9cf210c0d07e70e814f9ea5ea67f7c0f26181"} Feb 16 17:57:44 crc kubenswrapper[4794]: I0216 17:57:44.458579 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c2449aafb469e40c4e8c71892b9cf210c0d07e70e814f9ea5ea67f7c0f26181" Feb 16 17:57:44 crc kubenswrapper[4794]: I0216 17:57:44.458619 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd" Feb 16 17:57:44 crc kubenswrapper[4794]: I0216 17:57:44.557406 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1acb8748-d3eb-4984-91a5-2f2b43926abf-ssh-key-openstack-edpm-ipam\") pod \"1acb8748-d3eb-4984-91a5-2f2b43926abf\" (UID: \"1acb8748-d3eb-4984-91a5-2f2b43926abf\") " Feb 16 17:57:44 crc kubenswrapper[4794]: I0216 17:57:44.557611 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qx8d\" (UniqueName: \"kubernetes.io/projected/1acb8748-d3eb-4984-91a5-2f2b43926abf-kube-api-access-7qx8d\") pod \"1acb8748-d3eb-4984-91a5-2f2b43926abf\" (UID: \"1acb8748-d3eb-4984-91a5-2f2b43926abf\") " Feb 16 17:57:44 crc kubenswrapper[4794]: I0216 17:57:44.557880 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1acb8748-d3eb-4984-91a5-2f2b43926abf-inventory\") pod \"1acb8748-d3eb-4984-91a5-2f2b43926abf\" (UID: \"1acb8748-d3eb-4984-91a5-2f2b43926abf\") " Feb 16 17:57:44 crc kubenswrapper[4794]: I0216 17:57:44.562153 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1acb8748-d3eb-4984-91a5-2f2b43926abf-kube-api-access-7qx8d" (OuterVolumeSpecName: "kube-api-access-7qx8d") pod "1acb8748-d3eb-4984-91a5-2f2b43926abf" (UID: "1acb8748-d3eb-4984-91a5-2f2b43926abf"). InnerVolumeSpecName "kube-api-access-7qx8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:57:44 crc kubenswrapper[4794]: I0216 17:57:44.596609 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1acb8748-d3eb-4984-91a5-2f2b43926abf-inventory" (OuterVolumeSpecName: "inventory") pod "1acb8748-d3eb-4984-91a5-2f2b43926abf" (UID: "1acb8748-d3eb-4984-91a5-2f2b43926abf"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:57:44 crc kubenswrapper[4794]: I0216 17:57:44.597964 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1acb8748-d3eb-4984-91a5-2f2b43926abf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1acb8748-d3eb-4984-91a5-2f2b43926abf" (UID: "1acb8748-d3eb-4984-91a5-2f2b43926abf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:57:44 crc kubenswrapper[4794]: I0216 17:57:44.662145 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qx8d\" (UniqueName: \"kubernetes.io/projected/1acb8748-d3eb-4984-91a5-2f2b43926abf-kube-api-access-7qx8d\") on node \"crc\" DevicePath \"\"" Feb 16 17:57:44 crc kubenswrapper[4794]: I0216 17:57:44.662195 4794 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1acb8748-d3eb-4984-91a5-2f2b43926abf-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 17:57:44 crc kubenswrapper[4794]: I0216 17:57:44.662214 4794 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1acb8748-d3eb-4984-91a5-2f2b43926abf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 17:57:48 crc kubenswrapper[4794]: E0216 17:57:48.794107 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:57:50 crc kubenswrapper[4794]: I0216 17:57:50.141055 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:57:50 crc kubenswrapper[4794]: I0216 17:57:50.141340 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:57:53 crc kubenswrapper[4794]: E0216 17:57:53.793052 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:57:59 crc kubenswrapper[4794]: E0216 17:57:59.795698 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:58:05 crc kubenswrapper[4794]: E0216 17:58:05.794330 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:58:11 crc kubenswrapper[4794]: E0216 17:58:11.794407 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:58:16 crc kubenswrapper[4794]: E0216 17:58:16.795809 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:58:20 crc kubenswrapper[4794]: I0216 17:58:20.141005 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:58:20 crc kubenswrapper[4794]: I0216 17:58:20.141442 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:58:25 crc kubenswrapper[4794]: E0216 17:58:25.796645 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:58:28 crc kubenswrapper[4794]: E0216 17:58:28.794621 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:58:36 crc kubenswrapper[4794]: E0216 17:58:36.795373 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 17:58:42 crc kubenswrapper[4794]: E0216 17:58:42.795676 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 17:58:50 crc kubenswrapper[4794]: I0216 17:58:50.140755 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:58:50 crc kubenswrapper[4794]: I0216 17:58:50.141371 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:58:50 crc kubenswrapper[4794]: I0216 17:58:50.141418 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 17:58:50 crc kubenswrapper[4794]: I0216 17:58:50.142390 
4794 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121"} pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 17:58:50 crc kubenswrapper[4794]: I0216 17:58:50.142459 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" containerID="cri-o://b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121" gracePeriod=600
Feb 16 17:58:50 crc kubenswrapper[4794]: E0216 17:58:50.264105 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:58:50 crc kubenswrapper[4794]: I0216 17:58:50.278400 4794 generic.go:334] "Generic (PLEG): container finished" podID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121" exitCode=0
Feb 16 17:58:50 crc kubenswrapper[4794]: I0216 17:58:50.278451 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerDied","Data":"b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121"}
Feb 16 17:58:50 crc kubenswrapper[4794]: I0216 17:58:50.278531 4794 scope.go:117] "RemoveContainer" containerID="4ab17da36de9edf518efb441493a1cb12486c35845421f7700131462301c3174"
Feb 16 17:58:50 crc kubenswrapper[4794]: I0216 17:58:50.279334 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121"
Feb 16 17:58:50 crc kubenswrapper[4794]: E0216 17:58:50.279652 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:58:51 crc kubenswrapper[4794]: E0216 17:58:51.795881 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:58:54 crc kubenswrapper[4794]: E0216 17:58:54.805494 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.062267 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8s26h"]
Feb 16 17:59:01 crc kubenswrapper[4794]: E0216 17:59:01.063485 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f226a3a9-f3ac-40d7-8e48-e07fd5dff619" containerName="registry-server"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.063504 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f226a3a9-f3ac-40d7-8e48-e07fd5dff619" containerName="registry-server"
Feb 16 17:59:01 crc kubenswrapper[4794]: E0216 17:59:01.063529 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1acb8748-d3eb-4984-91a5-2f2b43926abf" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.063539 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="1acb8748-d3eb-4984-91a5-2f2b43926abf" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 17:59:01 crc kubenswrapper[4794]: E0216 17:59:01.063550 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f226a3a9-f3ac-40d7-8e48-e07fd5dff619" containerName="extract-content"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.063558 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f226a3a9-f3ac-40d7-8e48-e07fd5dff619" containerName="extract-content"
Feb 16 17:59:01 crc kubenswrapper[4794]: E0216 17:59:01.063581 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f226a3a9-f3ac-40d7-8e48-e07fd5dff619" containerName="extract-utilities"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.063592 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="f226a3a9-f3ac-40d7-8e48-e07fd5dff619" containerName="extract-utilities"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.063860 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="1acb8748-d3eb-4984-91a5-2f2b43926abf" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.063873 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f226a3a9-f3ac-40d7-8e48-e07fd5dff619" containerName="registry-server"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.064769 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8s26h"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.073235 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8s26h"]
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.113051 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.113285 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.113469 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.114529 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kshzw"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.217250 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7566f2a1-be5c-4ab7-8639-e162712a8ea4-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8s26h\" (UID: \"7566f2a1-be5c-4ab7-8639-e162712a8ea4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8s26h"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.217602 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7566f2a1-be5c-4ab7-8639-e162712a8ea4-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8s26h\" (UID: \"7566f2a1-be5c-4ab7-8639-e162712a8ea4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8s26h"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.217666 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xqph\" (UniqueName: \"kubernetes.io/projected/7566f2a1-be5c-4ab7-8639-e162712a8ea4-kube-api-access-5xqph\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8s26h\" (UID: \"7566f2a1-be5c-4ab7-8639-e162712a8ea4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8s26h"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.320866 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7566f2a1-be5c-4ab7-8639-e162712a8ea4-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8s26h\" (UID: \"7566f2a1-be5c-4ab7-8639-e162712a8ea4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8s26h"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.321014 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xqph\" (UniqueName: \"kubernetes.io/projected/7566f2a1-be5c-4ab7-8639-e162712a8ea4-kube-api-access-5xqph\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8s26h\" (UID: \"7566f2a1-be5c-4ab7-8639-e162712a8ea4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8s26h"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.321191 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7566f2a1-be5c-4ab7-8639-e162712a8ea4-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8s26h\" (UID: \"7566f2a1-be5c-4ab7-8639-e162712a8ea4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8s26h"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.327950 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7566f2a1-be5c-4ab7-8639-e162712a8ea4-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8s26h\" (UID: \"7566f2a1-be5c-4ab7-8639-e162712a8ea4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8s26h"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.327980 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7566f2a1-be5c-4ab7-8639-e162712a8ea4-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8s26h\" (UID: \"7566f2a1-be5c-4ab7-8639-e162712a8ea4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8s26h"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.341831 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xqph\" (UniqueName: \"kubernetes.io/projected/7566f2a1-be5c-4ab7-8639-e162712a8ea4-kube-api-access-5xqph\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8s26h\" (UID: \"7566f2a1-be5c-4ab7-8639-e162712a8ea4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8s26h"
Feb 16 17:59:01 crc kubenswrapper[4794]: I0216 17:59:01.434282 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8s26h"
Feb 16 17:59:02 crc kubenswrapper[4794]: I0216 17:59:02.025326 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8s26h"]
Feb 16 17:59:02 crc kubenswrapper[4794]: I0216 17:59:02.433592 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8s26h" event={"ID":"7566f2a1-be5c-4ab7-8639-e162712a8ea4","Type":"ContainerStarted","Data":"7cd9d8774619fcc90573bc3afd7bc3960052f1d0f47a2792326c1e7acfc42a65"}
Feb 16 17:59:02 crc kubenswrapper[4794]: E0216 17:59:02.797076 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:59:03 crc kubenswrapper[4794]: I0216 17:59:03.445897 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8s26h" event={"ID":"7566f2a1-be5c-4ab7-8639-e162712a8ea4","Type":"ContainerStarted","Data":"e5f2b42706e76bb61cf97a726f1ee07ee60a7d6ccb4e89d107caee62ba4d8189"}
Feb 16 17:59:03 crc kubenswrapper[4794]: I0216 17:59:03.466475 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8s26h" podStartSLOduration=2.034307471 podStartE2EDuration="2.466455186s" podCreationTimestamp="2026-02-16 17:59:01 +0000 UTC" firstStartedPulling="2026-02-16 17:59:02.026360439 +0000 UTC m=+3567.974455086" lastFinishedPulling="2026-02-16 17:59:02.458508154 +0000 UTC m=+3568.406602801" observedRunningTime="2026-02-16 17:59:03.461215128 +0000 UTC m=+3569.409309775" watchObservedRunningTime="2026-02-16 17:59:03.466455186 +0000 UTC m=+3569.414549833"
Feb 16 17:59:05 crc kubenswrapper[4794]: I0216 17:59:05.791960 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121"
Feb 16 17:59:05 crc kubenswrapper[4794]: E0216 17:59:05.792879 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:59:06 crc kubenswrapper[4794]: E0216 17:59:06.792976 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:59:15 crc kubenswrapper[4794]: E0216 17:59:15.793417 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:59:17 crc kubenswrapper[4794]: I0216 17:59:17.792530 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121"
Feb 16 17:59:17 crc kubenswrapper[4794]: E0216 17:59:17.793478 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:59:18 crc kubenswrapper[4794]: E0216 17:59:18.793831 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:59:28 crc kubenswrapper[4794]: E0216 17:59:28.794362 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:59:31 crc kubenswrapper[4794]: I0216 17:59:31.791570 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121"
Feb 16 17:59:31 crc kubenswrapper[4794]: E0216 17:59:31.792427 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:59:31 crc kubenswrapper[4794]: E0216 17:59:31.794169 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:59:43 crc kubenswrapper[4794]: I0216 17:59:43.792139 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121"
Feb 16 17:59:43 crc kubenswrapper[4794]: E0216 17:59:43.792823 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:59:43 crc kubenswrapper[4794]: E0216 17:59:43.794163 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 17:59:44 crc kubenswrapper[4794]: E0216 17:59:44.805278 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:59:55 crc kubenswrapper[4794]: I0216 17:59:55.791635 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121"
Feb 16 17:59:55 crc kubenswrapper[4794]: E0216 17:59:55.792650 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 17:59:55 crc kubenswrapper[4794]: E0216 17:59:55.793659 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 17:59:57 crc kubenswrapper[4794]: E0216 17:59:57.793249 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:00:00 crc kubenswrapper[4794]: I0216 18:00:00.151066 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521080-sr2cs"]
Feb 16 18:00:00 crc kubenswrapper[4794]: I0216 18:00:00.153255 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-sr2cs"
Feb 16 18:00:00 crc kubenswrapper[4794]: I0216 18:00:00.157065 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 18:00:00 crc kubenswrapper[4794]: I0216 18:00:00.157273 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 16 18:00:00 crc kubenswrapper[4794]: I0216 18:00:00.165771 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521080-sr2cs"]
Feb 16 18:00:00 crc kubenswrapper[4794]: I0216 18:00:00.239041 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d49daba7-3264-433e-95f1-ad5a80aef3dd-config-volume\") pod \"collect-profiles-29521080-sr2cs\" (UID: \"d49daba7-3264-433e-95f1-ad5a80aef3dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-sr2cs"
Feb 16 18:00:00 crc kubenswrapper[4794]: I0216 18:00:00.239290 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dm6g\" (UniqueName: \"kubernetes.io/projected/d49daba7-3264-433e-95f1-ad5a80aef3dd-kube-api-access-2dm6g\") pod \"collect-profiles-29521080-sr2cs\" (UID: \"d49daba7-3264-433e-95f1-ad5a80aef3dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-sr2cs"
Feb 16 18:00:00 crc kubenswrapper[4794]: I0216 18:00:00.239420 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d49daba7-3264-433e-95f1-ad5a80aef3dd-secret-volume\") pod \"collect-profiles-29521080-sr2cs\" (UID: \"d49daba7-3264-433e-95f1-ad5a80aef3dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-sr2cs"
Feb 16 18:00:00 crc kubenswrapper[4794]: I0216 18:00:00.341244 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d49daba7-3264-433e-95f1-ad5a80aef3dd-config-volume\") pod \"collect-profiles-29521080-sr2cs\" (UID: \"d49daba7-3264-433e-95f1-ad5a80aef3dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-sr2cs"
Feb 16 18:00:00 crc kubenswrapper[4794]: I0216 18:00:00.342027 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d49daba7-3264-433e-95f1-ad5a80aef3dd-config-volume\") pod \"collect-profiles-29521080-sr2cs\" (UID: \"d49daba7-3264-433e-95f1-ad5a80aef3dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-sr2cs"
Feb 16 18:00:00 crc kubenswrapper[4794]: I0216 18:00:00.342235 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dm6g\" (UniqueName: \"kubernetes.io/projected/d49daba7-3264-433e-95f1-ad5a80aef3dd-kube-api-access-2dm6g\") pod \"collect-profiles-29521080-sr2cs\" (UID: \"d49daba7-3264-433e-95f1-ad5a80aef3dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-sr2cs"
Feb 16 18:00:00 crc kubenswrapper[4794]: I0216 18:00:00.342573 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d49daba7-3264-433e-95f1-ad5a80aef3dd-secret-volume\") pod \"collect-profiles-29521080-sr2cs\" (UID: \"d49daba7-3264-433e-95f1-ad5a80aef3dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-sr2cs"
Feb 16 18:00:00 crc kubenswrapper[4794]: I0216 18:00:00.349550 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d49daba7-3264-433e-95f1-ad5a80aef3dd-secret-volume\") pod \"collect-profiles-29521080-sr2cs\" (UID: \"d49daba7-3264-433e-95f1-ad5a80aef3dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-sr2cs"
Feb 16 18:00:00 crc kubenswrapper[4794]: I0216 18:00:00.359681 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dm6g\" (UniqueName: \"kubernetes.io/projected/d49daba7-3264-433e-95f1-ad5a80aef3dd-kube-api-access-2dm6g\") pod \"collect-profiles-29521080-sr2cs\" (UID: \"d49daba7-3264-433e-95f1-ad5a80aef3dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-sr2cs"
Feb 16 18:00:00 crc kubenswrapper[4794]: I0216 18:00:00.476206 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-sr2cs"
Feb 16 18:00:00 crc kubenswrapper[4794]: I0216 18:00:00.946837 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521080-sr2cs"]
Feb 16 18:00:01 crc kubenswrapper[4794]: I0216 18:00:01.123949 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-sr2cs" event={"ID":"d49daba7-3264-433e-95f1-ad5a80aef3dd","Type":"ContainerStarted","Data":"5271ab8e51daa7febc28c08114b03047a469523d3b00168455be78183db2cc71"}
Feb 16 18:00:02 crc kubenswrapper[4794]: I0216 18:00:02.135043 4794 generic.go:334] "Generic (PLEG): container finished" podID="d49daba7-3264-433e-95f1-ad5a80aef3dd" containerID="cbaa3a857471aac472391653a53f7f15a109b0f0345060f234ec0355726661dd" exitCode=0
Feb 16 18:00:02 crc kubenswrapper[4794]: I0216 18:00:02.135101 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-sr2cs" event={"ID":"d49daba7-3264-433e-95f1-ad5a80aef3dd","Type":"ContainerDied","Data":"cbaa3a857471aac472391653a53f7f15a109b0f0345060f234ec0355726661dd"}
Feb 16 18:00:03 crc kubenswrapper[4794]: I0216 18:00:03.522048 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-sr2cs"
Feb 16 18:00:03 crc kubenswrapper[4794]: I0216 18:00:03.543424 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dm6g\" (UniqueName: \"kubernetes.io/projected/d49daba7-3264-433e-95f1-ad5a80aef3dd-kube-api-access-2dm6g\") pod \"d49daba7-3264-433e-95f1-ad5a80aef3dd\" (UID: \"d49daba7-3264-433e-95f1-ad5a80aef3dd\") "
Feb 16 18:00:03 crc kubenswrapper[4794]: I0216 18:00:03.543883 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d49daba7-3264-433e-95f1-ad5a80aef3dd-secret-volume\") pod \"d49daba7-3264-433e-95f1-ad5a80aef3dd\" (UID: \"d49daba7-3264-433e-95f1-ad5a80aef3dd\") "
Feb 16 18:00:03 crc kubenswrapper[4794]: I0216 18:00:03.543957 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d49daba7-3264-433e-95f1-ad5a80aef3dd-config-volume\") pod \"d49daba7-3264-433e-95f1-ad5a80aef3dd\" (UID: \"d49daba7-3264-433e-95f1-ad5a80aef3dd\") "
Feb 16 18:00:03 crc kubenswrapper[4794]: I0216 18:00:03.545421 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49daba7-3264-433e-95f1-ad5a80aef3dd-config-volume" (OuterVolumeSpecName: "config-volume") pod "d49daba7-3264-433e-95f1-ad5a80aef3dd" (UID: "d49daba7-3264-433e-95f1-ad5a80aef3dd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 18:00:03 crc kubenswrapper[4794]: I0216 18:00:03.580061 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49daba7-3264-433e-95f1-ad5a80aef3dd-kube-api-access-2dm6g" (OuterVolumeSpecName: "kube-api-access-2dm6g") pod "d49daba7-3264-433e-95f1-ad5a80aef3dd" (UID: "d49daba7-3264-433e-95f1-ad5a80aef3dd"). InnerVolumeSpecName "kube-api-access-2dm6g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 18:00:03 crc kubenswrapper[4794]: I0216 18:00:03.581462 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49daba7-3264-433e-95f1-ad5a80aef3dd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d49daba7-3264-433e-95f1-ad5a80aef3dd" (UID: "d49daba7-3264-433e-95f1-ad5a80aef3dd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 18:00:03 crc kubenswrapper[4794]: I0216 18:00:03.646371 4794 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d49daba7-3264-433e-95f1-ad5a80aef3dd-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 16 18:00:03 crc kubenswrapper[4794]: I0216 18:00:03.646423 4794 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d49daba7-3264-433e-95f1-ad5a80aef3dd-config-volume\") on node \"crc\" DevicePath \"\""
Feb 16 18:00:03 crc kubenswrapper[4794]: I0216 18:00:03.646443 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dm6g\" (UniqueName: \"kubernetes.io/projected/d49daba7-3264-433e-95f1-ad5a80aef3dd-kube-api-access-2dm6g\") on node \"crc\" DevicePath \"\""
Feb 16 18:00:04 crc kubenswrapper[4794]: I0216 18:00:04.169030 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-sr2cs" event={"ID":"d49daba7-3264-433e-95f1-ad5a80aef3dd","Type":"ContainerDied","Data":"5271ab8e51daa7febc28c08114b03047a469523d3b00168455be78183db2cc71"}
Feb 16 18:00:04 crc kubenswrapper[4794]: I0216 18:00:04.169491 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5271ab8e51daa7febc28c08114b03047a469523d3b00168455be78183db2cc71"
Feb 16 18:00:04 crc kubenswrapper[4794]: I0216 18:00:04.169077 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-sr2cs"
Feb 16 18:00:04 crc kubenswrapper[4794]: I0216 18:00:04.596563 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg"]
Feb 16 18:00:04 crc kubenswrapper[4794]: I0216 18:00:04.608133 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521035-7bfvg"]
Feb 16 18:00:04 crc kubenswrapper[4794]: I0216 18:00:04.806069 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27cfc9a8-5cbe-4493-865a-115bf389ec3b" path="/var/lib/kubelet/pods/27cfc9a8-5cbe-4493-865a-115bf389ec3b/volumes"
Feb 16 18:00:07 crc kubenswrapper[4794]: E0216 18:00:07.796479 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:00:08 crc kubenswrapper[4794]: I0216 18:00:08.796922 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121"
Feb 16 18:00:08 crc kubenswrapper[4794]: E0216 18:00:08.798046 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 18:00:11 crc kubenswrapper[4794]: E0216 18:00:11.793691 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:00:19 crc kubenswrapper[4794]: I0216 18:00:19.791683 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121"
Feb 16 18:00:19 crc kubenswrapper[4794]: E0216 18:00:19.793049 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 18:00:22 crc kubenswrapper[4794]: E0216 18:00:22.797215 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:00:24 crc kubenswrapper[4794]: E0216 18:00:24.804297 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:00:25 crc kubenswrapper[4794]: I0216 18:00:25.404425 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9hss6"]
Feb 16 18:00:25 crc kubenswrapper[4794]: E0216 18:00:25.404935 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49daba7-3264-433e-95f1-ad5a80aef3dd" containerName="collect-profiles"
Feb 16 18:00:25 crc kubenswrapper[4794]: I0216 18:00:25.404957 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49daba7-3264-433e-95f1-ad5a80aef3dd" containerName="collect-profiles"
Feb 16 18:00:25 crc kubenswrapper[4794]: I0216 18:00:25.405409 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49daba7-3264-433e-95f1-ad5a80aef3dd" containerName="collect-profiles"
Feb 16 18:00:25 crc kubenswrapper[4794]: I0216 18:00:25.407476 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9hss6"
Feb 16 18:00:25 crc kubenswrapper[4794]: I0216 18:00:25.416097 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3267ead2-ac39-4131-82f6-2c9aec60fd65-utilities\") pod \"community-operators-9hss6\" (UID: \"3267ead2-ac39-4131-82f6-2c9aec60fd65\") " pod="openshift-marketplace/community-operators-9hss6"
Feb 16 18:00:25 crc kubenswrapper[4794]: I0216 18:00:25.416420 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3267ead2-ac39-4131-82f6-2c9aec60fd65-catalog-content\") pod \"community-operators-9hss6\" (UID: \"3267ead2-ac39-4131-82f6-2c9aec60fd65\") " pod="openshift-marketplace/community-operators-9hss6"
Feb 16 18:00:25 crc kubenswrapper[4794]: I0216 18:00:25.416693 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb2gl\" (UniqueName: \"kubernetes.io/projected/3267ead2-ac39-4131-82f6-2c9aec60fd65-kube-api-access-nb2gl\") pod \"community-operators-9hss6\" (UID: \"3267ead2-ac39-4131-82f6-2c9aec60fd65\") " pod="openshift-marketplace/community-operators-9hss6"
Feb 16 18:00:25 crc kubenswrapper[4794]: I0216 18:00:25.450876 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9hss6"]
Feb 16 18:00:25 crc kubenswrapper[4794]: I0216 18:00:25.518987 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3267ead2-ac39-4131-82f6-2c9aec60fd65-catalog-content\") pod \"community-operators-9hss6\" (UID: \"3267ead2-ac39-4131-82f6-2c9aec60fd65\") " pod="openshift-marketplace/community-operators-9hss6"
Feb 16 18:00:25 crc kubenswrapper[4794]: I0216 18:00:25.519079 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb2gl\" (UniqueName: \"kubernetes.io/projected/3267ead2-ac39-4131-82f6-2c9aec60fd65-kube-api-access-nb2gl\") pod \"community-operators-9hss6\" (UID: \"3267ead2-ac39-4131-82f6-2c9aec60fd65\") " pod="openshift-marketplace/community-operators-9hss6"
Feb 16 18:00:25 crc kubenswrapper[4794]: I0216 18:00:25.519220 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3267ead2-ac39-4131-82f6-2c9aec60fd65-utilities\") pod \"community-operators-9hss6\" (UID: \"3267ead2-ac39-4131-82f6-2c9aec60fd65\") " pod="openshift-marketplace/community-operators-9hss6"
Feb 16 18:00:25 crc kubenswrapper[4794]: I0216 18:00:25.519704 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3267ead2-ac39-4131-82f6-2c9aec60fd65-utilities\") pod \"community-operators-9hss6\" (UID: \"3267ead2-ac39-4131-82f6-2c9aec60fd65\") " pod="openshift-marketplace/community-operators-9hss6"
Feb 16 18:00:25 crc kubenswrapper[4794]: I0216 18:00:25.519995 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3267ead2-ac39-4131-82f6-2c9aec60fd65-catalog-content\") pod \"community-operators-9hss6\" (UID: \"3267ead2-ac39-4131-82f6-2c9aec60fd65\") " pod="openshift-marketplace/community-operators-9hss6"
Feb 16 18:00:25 crc kubenswrapper[4794]: I0216 18:00:25.558889 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb2gl\" (UniqueName: \"kubernetes.io/projected/3267ead2-ac39-4131-82f6-2c9aec60fd65-kube-api-access-nb2gl\") pod \"community-operators-9hss6\" (UID: \"3267ead2-ac39-4131-82f6-2c9aec60fd65\") " pod="openshift-marketplace/community-operators-9hss6"
Feb 16 18:00:25 crc kubenswrapper[4794]: I0216 18:00:25.751690 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9hss6"
Feb 16 18:00:26 crc kubenswrapper[4794]: I0216 18:00:26.296111 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9hss6"]
Feb 16 18:00:26 crc kubenswrapper[4794]: I0216 18:00:26.422904 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hss6" event={"ID":"3267ead2-ac39-4131-82f6-2c9aec60fd65","Type":"ContainerStarted","Data":"41ee96b76e461ff89489ba9d21618e64e20857827deab9c3aa28f12e41dba591"}
Feb 16 18:00:27 crc kubenswrapper[4794]: I0216 18:00:27.432992 4794 generic.go:334] "Generic (PLEG): container finished" podID="3267ead2-ac39-4131-82f6-2c9aec60fd65" containerID="c3303d12cfc91db2548f65cc6c5185713c7d0a31f58622d7a5ee15ca40ddb9f0" exitCode=0
Feb 16 18:00:27 crc kubenswrapper[4794]: I0216 18:00:27.433025 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hss6" event={"ID":"3267ead2-ac39-4131-82f6-2c9aec60fd65","Type":"ContainerDied","Data":"c3303d12cfc91db2548f65cc6c5185713c7d0a31f58622d7a5ee15ca40ddb9f0"}
Feb 16 18:00:29 crc kubenswrapper[4794]: I0216 18:00:29.465008 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hss6" event={"ID":"3267ead2-ac39-4131-82f6-2c9aec60fd65","Type":"ContainerStarted","Data":"43a268c05d06ab8157c90628b9d2ed793e38d33a93351ace4950e9b9964c2d96"}
Feb 16 18:00:30 crc kubenswrapper[4794]: I0216 18:00:30.481344 4794 generic.go:334] "Generic (PLEG): container finished" podID="3267ead2-ac39-4131-82f6-2c9aec60fd65" containerID="43a268c05d06ab8157c90628b9d2ed793e38d33a93351ace4950e9b9964c2d96" exitCode=0
Feb 16 18:00:30 crc kubenswrapper[4794]: I0216 18:00:30.481492 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hss6"
event={"ID":"3267ead2-ac39-4131-82f6-2c9aec60fd65","Type":"ContainerDied","Data":"43a268c05d06ab8157c90628b9d2ed793e38d33a93351ace4950e9b9964c2d96"} Feb 16 18:00:31 crc kubenswrapper[4794]: I0216 18:00:31.497286 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hss6" event={"ID":"3267ead2-ac39-4131-82f6-2c9aec60fd65","Type":"ContainerStarted","Data":"d1949a12aef2e538c7866a90b02ab04242c8272707f858efd33caf7a27d1e12c"} Feb 16 18:00:31 crc kubenswrapper[4794]: I0216 18:00:31.522995 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9hss6" podStartSLOduration=3.085761735 podStartE2EDuration="6.522975674s" podCreationTimestamp="2026-02-16 18:00:25 +0000 UTC" firstStartedPulling="2026-02-16 18:00:27.436037235 +0000 UTC m=+3653.384131882" lastFinishedPulling="2026-02-16 18:00:30.873251174 +0000 UTC m=+3656.821345821" observedRunningTime="2026-02-16 18:00:31.512516818 +0000 UTC m=+3657.460611465" watchObservedRunningTime="2026-02-16 18:00:31.522975674 +0000 UTC m=+3657.471070321" Feb 16 18:00:31 crc kubenswrapper[4794]: I0216 18:00:31.791625 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121" Feb 16 18:00:31 crc kubenswrapper[4794]: E0216 18:00:31.791999 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:00:34 crc kubenswrapper[4794]: E0216 18:00:34.800263 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:00:35 crc kubenswrapper[4794]: I0216 18:00:35.753209 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9hss6" Feb 16 18:00:35 crc kubenswrapper[4794]: I0216 18:00:35.753276 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9hss6" Feb 16 18:00:35 crc kubenswrapper[4794]: E0216 18:00:35.793740 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:00:35 crc kubenswrapper[4794]: I0216 18:00:35.812160 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9hss6" Feb 16 18:00:36 crc kubenswrapper[4794]: I0216 18:00:36.591211 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9hss6" Feb 16 18:00:36 crc kubenswrapper[4794]: I0216 18:00:36.644937 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9hss6"] Feb 16 18:00:38 crc kubenswrapper[4794]: I0216 18:00:38.559727 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9hss6" podUID="3267ead2-ac39-4131-82f6-2c9aec60fd65" containerName="registry-server" containerID="cri-o://d1949a12aef2e538c7866a90b02ab04242c8272707f858efd33caf7a27d1e12c" gracePeriod=2 Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.083458 4794 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9hss6" Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.147627 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3267ead2-ac39-4131-82f6-2c9aec60fd65-catalog-content\") pod \"3267ead2-ac39-4131-82f6-2c9aec60fd65\" (UID: \"3267ead2-ac39-4131-82f6-2c9aec60fd65\") " Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.147940 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3267ead2-ac39-4131-82f6-2c9aec60fd65-utilities\") pod \"3267ead2-ac39-4131-82f6-2c9aec60fd65\" (UID: \"3267ead2-ac39-4131-82f6-2c9aec60fd65\") " Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.147988 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb2gl\" (UniqueName: \"kubernetes.io/projected/3267ead2-ac39-4131-82f6-2c9aec60fd65-kube-api-access-nb2gl\") pod \"3267ead2-ac39-4131-82f6-2c9aec60fd65\" (UID: \"3267ead2-ac39-4131-82f6-2c9aec60fd65\") " Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.148541 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3267ead2-ac39-4131-82f6-2c9aec60fd65-utilities" (OuterVolumeSpecName: "utilities") pod "3267ead2-ac39-4131-82f6-2c9aec60fd65" (UID: "3267ead2-ac39-4131-82f6-2c9aec60fd65"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.148902 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3267ead2-ac39-4131-82f6-2c9aec60fd65-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.153063 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3267ead2-ac39-4131-82f6-2c9aec60fd65-kube-api-access-nb2gl" (OuterVolumeSpecName: "kube-api-access-nb2gl") pod "3267ead2-ac39-4131-82f6-2c9aec60fd65" (UID: "3267ead2-ac39-4131-82f6-2c9aec60fd65"). InnerVolumeSpecName "kube-api-access-nb2gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.206630 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3267ead2-ac39-4131-82f6-2c9aec60fd65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3267ead2-ac39-4131-82f6-2c9aec60fd65" (UID: "3267ead2-ac39-4131-82f6-2c9aec60fd65"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.251374 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3267ead2-ac39-4131-82f6-2c9aec60fd65-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.251411 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb2gl\" (UniqueName: \"kubernetes.io/projected/3267ead2-ac39-4131-82f6-2c9aec60fd65-kube-api-access-nb2gl\") on node \"crc\" DevicePath \"\"" Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.572697 4794 generic.go:334] "Generic (PLEG): container finished" podID="3267ead2-ac39-4131-82f6-2c9aec60fd65" containerID="d1949a12aef2e538c7866a90b02ab04242c8272707f858efd33caf7a27d1e12c" exitCode=0 Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.572741 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hss6" event={"ID":"3267ead2-ac39-4131-82f6-2c9aec60fd65","Type":"ContainerDied","Data":"d1949a12aef2e538c7866a90b02ab04242c8272707f858efd33caf7a27d1e12c"} Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.572764 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hss6" event={"ID":"3267ead2-ac39-4131-82f6-2c9aec60fd65","Type":"ContainerDied","Data":"41ee96b76e461ff89489ba9d21618e64e20857827deab9c3aa28f12e41dba591"} Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.572770 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9hss6" Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.572780 4794 scope.go:117] "RemoveContainer" containerID="d1949a12aef2e538c7866a90b02ab04242c8272707f858efd33caf7a27d1e12c" Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.614202 4794 scope.go:117] "RemoveContainer" containerID="43a268c05d06ab8157c90628b9d2ed793e38d33a93351ace4950e9b9964c2d96" Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.625925 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9hss6"] Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.644106 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9hss6"] Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.649202 4794 scope.go:117] "RemoveContainer" containerID="c3303d12cfc91db2548f65cc6c5185713c7d0a31f58622d7a5ee15ca40ddb9f0" Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.707422 4794 scope.go:117] "RemoveContainer" containerID="d1949a12aef2e538c7866a90b02ab04242c8272707f858efd33caf7a27d1e12c" Feb 16 18:00:39 crc kubenswrapper[4794]: E0216 18:00:39.707939 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1949a12aef2e538c7866a90b02ab04242c8272707f858efd33caf7a27d1e12c\": container with ID starting with d1949a12aef2e538c7866a90b02ab04242c8272707f858efd33caf7a27d1e12c not found: ID does not exist" containerID="d1949a12aef2e538c7866a90b02ab04242c8272707f858efd33caf7a27d1e12c" Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.707988 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1949a12aef2e538c7866a90b02ab04242c8272707f858efd33caf7a27d1e12c"} err="failed to get container status \"d1949a12aef2e538c7866a90b02ab04242c8272707f858efd33caf7a27d1e12c\": rpc error: code = NotFound desc = could not find 
container \"d1949a12aef2e538c7866a90b02ab04242c8272707f858efd33caf7a27d1e12c\": container with ID starting with d1949a12aef2e538c7866a90b02ab04242c8272707f858efd33caf7a27d1e12c not found: ID does not exist" Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.708014 4794 scope.go:117] "RemoveContainer" containerID="43a268c05d06ab8157c90628b9d2ed793e38d33a93351ace4950e9b9964c2d96" Feb 16 18:00:39 crc kubenswrapper[4794]: E0216 18:00:39.708424 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43a268c05d06ab8157c90628b9d2ed793e38d33a93351ace4950e9b9964c2d96\": container with ID starting with 43a268c05d06ab8157c90628b9d2ed793e38d33a93351ace4950e9b9964c2d96 not found: ID does not exist" containerID="43a268c05d06ab8157c90628b9d2ed793e38d33a93351ace4950e9b9964c2d96" Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.708454 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a268c05d06ab8157c90628b9d2ed793e38d33a93351ace4950e9b9964c2d96"} err="failed to get container status \"43a268c05d06ab8157c90628b9d2ed793e38d33a93351ace4950e9b9964c2d96\": rpc error: code = NotFound desc = could not find container \"43a268c05d06ab8157c90628b9d2ed793e38d33a93351ace4950e9b9964c2d96\": container with ID starting with 43a268c05d06ab8157c90628b9d2ed793e38d33a93351ace4950e9b9964c2d96 not found: ID does not exist" Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.708473 4794 scope.go:117] "RemoveContainer" containerID="c3303d12cfc91db2548f65cc6c5185713c7d0a31f58622d7a5ee15ca40ddb9f0" Feb 16 18:00:39 crc kubenswrapper[4794]: E0216 18:00:39.709100 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3303d12cfc91db2548f65cc6c5185713c7d0a31f58622d7a5ee15ca40ddb9f0\": container with ID starting with c3303d12cfc91db2548f65cc6c5185713c7d0a31f58622d7a5ee15ca40ddb9f0 not found: ID does 
not exist" containerID="c3303d12cfc91db2548f65cc6c5185713c7d0a31f58622d7a5ee15ca40ddb9f0" Feb 16 18:00:39 crc kubenswrapper[4794]: I0216 18:00:39.709142 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3303d12cfc91db2548f65cc6c5185713c7d0a31f58622d7a5ee15ca40ddb9f0"} err="failed to get container status \"c3303d12cfc91db2548f65cc6c5185713c7d0a31f58622d7a5ee15ca40ddb9f0\": rpc error: code = NotFound desc = could not find container \"c3303d12cfc91db2548f65cc6c5185713c7d0a31f58622d7a5ee15ca40ddb9f0\": container with ID starting with c3303d12cfc91db2548f65cc6c5185713c7d0a31f58622d7a5ee15ca40ddb9f0 not found: ID does not exist" Feb 16 18:00:40 crc kubenswrapper[4794]: I0216 18:00:40.805410 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3267ead2-ac39-4131-82f6-2c9aec60fd65" path="/var/lib/kubelet/pods/3267ead2-ac39-4131-82f6-2c9aec60fd65/volumes" Feb 16 18:00:42 crc kubenswrapper[4794]: I0216 18:00:42.366719 4794 scope.go:117] "RemoveContainer" containerID="cf9c9fac47fe6665514641843d214a2cfeed9f0c06f7e93bc1645127a7883c2b" Feb 16 18:00:45 crc kubenswrapper[4794]: I0216 18:00:45.792611 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121" Feb 16 18:00:45 crc kubenswrapper[4794]: E0216 18:00:45.793885 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:00:45 crc kubenswrapper[4794]: E0216 18:00:45.794557 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:00:48 crc kubenswrapper[4794]: E0216 18:00:48.794166 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:00:50 crc kubenswrapper[4794]: I0216 18:00:50.769414 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mrn2w"] Feb 16 18:00:50 crc kubenswrapper[4794]: E0216 18:00:50.770324 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3267ead2-ac39-4131-82f6-2c9aec60fd65" containerName="extract-content" Feb 16 18:00:50 crc kubenswrapper[4794]: I0216 18:00:50.770337 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="3267ead2-ac39-4131-82f6-2c9aec60fd65" containerName="extract-content" Feb 16 18:00:50 crc kubenswrapper[4794]: E0216 18:00:50.770368 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3267ead2-ac39-4131-82f6-2c9aec60fd65" containerName="registry-server" Feb 16 18:00:50 crc kubenswrapper[4794]: I0216 18:00:50.770374 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="3267ead2-ac39-4131-82f6-2c9aec60fd65" containerName="registry-server" Feb 16 18:00:50 crc kubenswrapper[4794]: E0216 18:00:50.770388 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3267ead2-ac39-4131-82f6-2c9aec60fd65" containerName="extract-utilities" Feb 16 18:00:50 crc kubenswrapper[4794]: I0216 18:00:50.770395 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="3267ead2-ac39-4131-82f6-2c9aec60fd65" containerName="extract-utilities" Feb 16 18:00:50 crc kubenswrapper[4794]: I0216 
18:00:50.770620 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="3267ead2-ac39-4131-82f6-2c9aec60fd65" containerName="registry-server" Feb 16 18:00:50 crc kubenswrapper[4794]: I0216 18:00:50.773192 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mrn2w" Feb 16 18:00:50 crc kubenswrapper[4794]: I0216 18:00:50.856108 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mrn2w"] Feb 16 18:00:50 crc kubenswrapper[4794]: I0216 18:00:50.954091 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7xh2\" (UniqueName: \"kubernetes.io/projected/a465458d-7515-43af-9220-6bd07e2a08ea-kube-api-access-t7xh2\") pod \"redhat-operators-mrn2w\" (UID: \"a465458d-7515-43af-9220-6bd07e2a08ea\") " pod="openshift-marketplace/redhat-operators-mrn2w" Feb 16 18:00:50 crc kubenswrapper[4794]: I0216 18:00:50.954583 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a465458d-7515-43af-9220-6bd07e2a08ea-catalog-content\") pod \"redhat-operators-mrn2w\" (UID: \"a465458d-7515-43af-9220-6bd07e2a08ea\") " pod="openshift-marketplace/redhat-operators-mrn2w" Feb 16 18:00:50 crc kubenswrapper[4794]: I0216 18:00:50.954652 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a465458d-7515-43af-9220-6bd07e2a08ea-utilities\") pod \"redhat-operators-mrn2w\" (UID: \"a465458d-7515-43af-9220-6bd07e2a08ea\") " pod="openshift-marketplace/redhat-operators-mrn2w" Feb 16 18:00:51 crc kubenswrapper[4794]: I0216 18:00:51.056860 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7xh2\" (UniqueName: 
\"kubernetes.io/projected/a465458d-7515-43af-9220-6bd07e2a08ea-kube-api-access-t7xh2\") pod \"redhat-operators-mrn2w\" (UID: \"a465458d-7515-43af-9220-6bd07e2a08ea\") " pod="openshift-marketplace/redhat-operators-mrn2w" Feb 16 18:00:51 crc kubenswrapper[4794]: I0216 18:00:51.057003 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a465458d-7515-43af-9220-6bd07e2a08ea-catalog-content\") pod \"redhat-operators-mrn2w\" (UID: \"a465458d-7515-43af-9220-6bd07e2a08ea\") " pod="openshift-marketplace/redhat-operators-mrn2w" Feb 16 18:00:51 crc kubenswrapper[4794]: I0216 18:00:51.057050 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a465458d-7515-43af-9220-6bd07e2a08ea-utilities\") pod \"redhat-operators-mrn2w\" (UID: \"a465458d-7515-43af-9220-6bd07e2a08ea\") " pod="openshift-marketplace/redhat-operators-mrn2w" Feb 16 18:00:51 crc kubenswrapper[4794]: I0216 18:00:51.058193 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a465458d-7515-43af-9220-6bd07e2a08ea-utilities\") pod \"redhat-operators-mrn2w\" (UID: \"a465458d-7515-43af-9220-6bd07e2a08ea\") " pod="openshift-marketplace/redhat-operators-mrn2w" Feb 16 18:00:51 crc kubenswrapper[4794]: I0216 18:00:51.058327 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a465458d-7515-43af-9220-6bd07e2a08ea-catalog-content\") pod \"redhat-operators-mrn2w\" (UID: \"a465458d-7515-43af-9220-6bd07e2a08ea\") " pod="openshift-marketplace/redhat-operators-mrn2w" Feb 16 18:00:51 crc kubenswrapper[4794]: I0216 18:00:51.076822 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7xh2\" (UniqueName: 
\"kubernetes.io/projected/a465458d-7515-43af-9220-6bd07e2a08ea-kube-api-access-t7xh2\") pod \"redhat-operators-mrn2w\" (UID: \"a465458d-7515-43af-9220-6bd07e2a08ea\") " pod="openshift-marketplace/redhat-operators-mrn2w" Feb 16 18:00:51 crc kubenswrapper[4794]: I0216 18:00:51.138814 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mrn2w" Feb 16 18:00:51 crc kubenswrapper[4794]: I0216 18:00:51.764208 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mrn2w"] Feb 16 18:00:52 crc kubenswrapper[4794]: I0216 18:00:52.708887 4794 generic.go:334] "Generic (PLEG): container finished" podID="a465458d-7515-43af-9220-6bd07e2a08ea" containerID="e1f9d18981484f9166134c8dcdfa200895ec7769bd76ed99b375201ed689060f" exitCode=0 Feb 16 18:00:52 crc kubenswrapper[4794]: I0216 18:00:52.708948 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrn2w" event={"ID":"a465458d-7515-43af-9220-6bd07e2a08ea","Type":"ContainerDied","Data":"e1f9d18981484f9166134c8dcdfa200895ec7769bd76ed99b375201ed689060f"} Feb 16 18:00:52 crc kubenswrapper[4794]: I0216 18:00:52.709001 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrn2w" event={"ID":"a465458d-7515-43af-9220-6bd07e2a08ea","Type":"ContainerStarted","Data":"5fdd871548095e1dd134b0fda086f8b75ce0abedf127651d00d2ae5d6951b8b8"} Feb 16 18:00:53 crc kubenswrapper[4794]: I0216 18:00:53.725853 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrn2w" event={"ID":"a465458d-7515-43af-9220-6bd07e2a08ea","Type":"ContainerStarted","Data":"77c83bf9c83d18cf9f3d7f6147ae3ef5f8b4637c24d74671147c30dc1ae58f3d"} Feb 16 18:00:57 crc kubenswrapper[4794]: I0216 18:00:57.791780 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121" Feb 16 18:00:57 crc 
kubenswrapper[4794]: E0216 18:00:57.793719 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:00:57 crc kubenswrapper[4794]: E0216 18:00:57.794558 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:00:57 crc kubenswrapper[4794]: E0216 18:00:57.923826 4794 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda465458d_7515_43af_9220_6bd07e2a08ea.slice/crio-conmon-77c83bf9c83d18cf9f3d7f6147ae3ef5f8b4637c24d74671147c30dc1ae58f3d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda465458d_7515_43af_9220_6bd07e2a08ea.slice/crio-77c83bf9c83d18cf9f3d7f6147ae3ef5f8b4637c24d74671147c30dc1ae58f3d.scope\": RecentStats: unable to find data in memory cache]" Feb 16 18:00:58 crc kubenswrapper[4794]: I0216 18:00:58.791518 4794 generic.go:334] "Generic (PLEG): container finished" podID="a465458d-7515-43af-9220-6bd07e2a08ea" containerID="77c83bf9c83d18cf9f3d7f6147ae3ef5f8b4637c24d74671147c30dc1ae58f3d" exitCode=0 Feb 16 18:00:58 crc kubenswrapper[4794]: I0216 18:00:58.803294 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrn2w" 
event={"ID":"a465458d-7515-43af-9220-6bd07e2a08ea","Type":"ContainerDied","Data":"77c83bf9c83d18cf9f3d7f6147ae3ef5f8b4637c24d74671147c30dc1ae58f3d"} Feb 16 18:00:59 crc kubenswrapper[4794]: I0216 18:00:59.802388 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrn2w" event={"ID":"a465458d-7515-43af-9220-6bd07e2a08ea","Type":"ContainerStarted","Data":"e9fcc9313a6315c523f6e20244d4bc7100f4580443f356bc695afcd27033a6ab"} Feb 16 18:00:59 crc kubenswrapper[4794]: I0216 18:00:59.825629 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mrn2w" podStartSLOduration=3.235894159 podStartE2EDuration="9.825609033s" podCreationTimestamp="2026-02-16 18:00:50 +0000 UTC" firstStartedPulling="2026-02-16 18:00:52.712537815 +0000 UTC m=+3678.660632502" lastFinishedPulling="2026-02-16 18:00:59.302252729 +0000 UTC m=+3685.250347376" observedRunningTime="2026-02-16 18:00:59.823157674 +0000 UTC m=+3685.771252321" watchObservedRunningTime="2026-02-16 18:00:59.825609033 +0000 UTC m=+3685.773703680" Feb 16 18:01:00 crc kubenswrapper[4794]: I0216 18:01:00.178199 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29521081-8bkn4"] Feb 16 18:01:00 crc kubenswrapper[4794]: I0216 18:01:00.179958 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29521081-8bkn4"
Feb 16 18:01:00 crc kubenswrapper[4794]: I0216 18:01:00.198100 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls795\" (UniqueName: \"kubernetes.io/projected/4e46afa6-6711-47b2-88ca-b2b185d690e7-kube-api-access-ls795\") pod \"keystone-cron-29521081-8bkn4\" (UID: \"4e46afa6-6711-47b2-88ca-b2b185d690e7\") " pod="openstack/keystone-cron-29521081-8bkn4"
Feb 16 18:01:00 crc kubenswrapper[4794]: I0216 18:01:00.198176 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e46afa6-6711-47b2-88ca-b2b185d690e7-combined-ca-bundle\") pod \"keystone-cron-29521081-8bkn4\" (UID: \"4e46afa6-6711-47b2-88ca-b2b185d690e7\") " pod="openstack/keystone-cron-29521081-8bkn4"
Feb 16 18:01:00 crc kubenswrapper[4794]: I0216 18:01:00.198224 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e46afa6-6711-47b2-88ca-b2b185d690e7-config-data\") pod \"keystone-cron-29521081-8bkn4\" (UID: \"4e46afa6-6711-47b2-88ca-b2b185d690e7\") " pod="openstack/keystone-cron-29521081-8bkn4"
Feb 16 18:01:00 crc kubenswrapper[4794]: I0216 18:01:00.198747 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e46afa6-6711-47b2-88ca-b2b185d690e7-fernet-keys\") pod \"keystone-cron-29521081-8bkn4\" (UID: \"4e46afa6-6711-47b2-88ca-b2b185d690e7\") " pod="openstack/keystone-cron-29521081-8bkn4"
Feb 16 18:01:00 crc kubenswrapper[4794]: I0216 18:01:00.209029 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29521081-8bkn4"]
Feb 16 18:01:00 crc kubenswrapper[4794]: I0216 18:01:00.300961 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls795\" (UniqueName: \"kubernetes.io/projected/4e46afa6-6711-47b2-88ca-b2b185d690e7-kube-api-access-ls795\") pod \"keystone-cron-29521081-8bkn4\" (UID: \"4e46afa6-6711-47b2-88ca-b2b185d690e7\") " pod="openstack/keystone-cron-29521081-8bkn4"
Feb 16 18:01:00 crc kubenswrapper[4794]: I0216 18:01:00.301039 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e46afa6-6711-47b2-88ca-b2b185d690e7-combined-ca-bundle\") pod \"keystone-cron-29521081-8bkn4\" (UID: \"4e46afa6-6711-47b2-88ca-b2b185d690e7\") " pod="openstack/keystone-cron-29521081-8bkn4"
Feb 16 18:01:00 crc kubenswrapper[4794]: I0216 18:01:00.301095 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e46afa6-6711-47b2-88ca-b2b185d690e7-config-data\") pod \"keystone-cron-29521081-8bkn4\" (UID: \"4e46afa6-6711-47b2-88ca-b2b185d690e7\") " pod="openstack/keystone-cron-29521081-8bkn4"
Feb 16 18:01:00 crc kubenswrapper[4794]: I0216 18:01:00.301267 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e46afa6-6711-47b2-88ca-b2b185d690e7-fernet-keys\") pod \"keystone-cron-29521081-8bkn4\" (UID: \"4e46afa6-6711-47b2-88ca-b2b185d690e7\") " pod="openstack/keystone-cron-29521081-8bkn4"
Feb 16 18:01:00 crc kubenswrapper[4794]: I0216 18:01:00.313740 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e46afa6-6711-47b2-88ca-b2b185d690e7-fernet-keys\") pod \"keystone-cron-29521081-8bkn4\" (UID: \"4e46afa6-6711-47b2-88ca-b2b185d690e7\") " pod="openstack/keystone-cron-29521081-8bkn4"
Feb 16 18:01:00 crc kubenswrapper[4794]: I0216 18:01:00.316206 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e46afa6-6711-47b2-88ca-b2b185d690e7-combined-ca-bundle\") pod \"keystone-cron-29521081-8bkn4\" (UID: \"4e46afa6-6711-47b2-88ca-b2b185d690e7\") " pod="openstack/keystone-cron-29521081-8bkn4"
Feb 16 18:01:00 crc kubenswrapper[4794]: I0216 18:01:00.329876 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e46afa6-6711-47b2-88ca-b2b185d690e7-config-data\") pod \"keystone-cron-29521081-8bkn4\" (UID: \"4e46afa6-6711-47b2-88ca-b2b185d690e7\") " pod="openstack/keystone-cron-29521081-8bkn4"
Feb 16 18:01:00 crc kubenswrapper[4794]: I0216 18:01:00.356194 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls795\" (UniqueName: \"kubernetes.io/projected/4e46afa6-6711-47b2-88ca-b2b185d690e7-kube-api-access-ls795\") pod \"keystone-cron-29521081-8bkn4\" (UID: \"4e46afa6-6711-47b2-88ca-b2b185d690e7\") " pod="openstack/keystone-cron-29521081-8bkn4"
Feb 16 18:01:00 crc kubenswrapper[4794]: I0216 18:01:00.508140 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29521081-8bkn4"
Feb 16 18:01:00 crc kubenswrapper[4794]: I0216 18:01:00.997234 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29521081-8bkn4"]
Feb 16 18:01:01 crc kubenswrapper[4794]: I0216 18:01:01.139111 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mrn2w"
Feb 16 18:01:01 crc kubenswrapper[4794]: I0216 18:01:01.139183 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mrn2w"
Feb 16 18:01:01 crc kubenswrapper[4794]: E0216 18:01:01.794392 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:01:01 crc kubenswrapper[4794]: I0216 18:01:01.830752 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521081-8bkn4" event={"ID":"4e46afa6-6711-47b2-88ca-b2b185d690e7","Type":"ContainerStarted","Data":"4c8bc7cc03cbbcfddcf62856af33f79f9f90791fa4a29819af61c3d7276e5be6"}
Feb 16 18:01:01 crc kubenswrapper[4794]: I0216 18:01:01.830812 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521081-8bkn4" event={"ID":"4e46afa6-6711-47b2-88ca-b2b185d690e7","Type":"ContainerStarted","Data":"6e3ac97fa0518d76c515968ed8075742b5c8fe1a0047b3457c8dd3620b72e2e2"}
Feb 16 18:01:01 crc kubenswrapper[4794]: I0216 18:01:01.855467 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29521081-8bkn4" podStartSLOduration=1.855446572 podStartE2EDuration="1.855446572s" podCreationTimestamp="2026-02-16 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 18:01:01.85114492 +0000 UTC m=+3687.799239577" watchObservedRunningTime="2026-02-16 18:01:01.855446572 +0000 UTC m=+3687.803541219"
Feb 16 18:01:02 crc kubenswrapper[4794]: I0216 18:01:02.226645 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mrn2w" podUID="a465458d-7515-43af-9220-6bd07e2a08ea" containerName="registry-server" probeResult="failure" output=<
Feb 16 18:01:02 crc kubenswrapper[4794]: timeout: failed to connect service ":50051" within 1s
Feb 16 18:01:02 crc kubenswrapper[4794]: >
Feb 16 18:01:05 crc kubenswrapper[4794]: I0216 18:01:05.875971 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521081-8bkn4" event={"ID":"4e46afa6-6711-47b2-88ca-b2b185d690e7","Type":"ContainerDied","Data":"4c8bc7cc03cbbcfddcf62856af33f79f9f90791fa4a29819af61c3d7276e5be6"}
Feb 16 18:01:05 crc kubenswrapper[4794]: I0216 18:01:05.877669 4794 generic.go:334] "Generic (PLEG): container finished" podID="4e46afa6-6711-47b2-88ca-b2b185d690e7" containerID="4c8bc7cc03cbbcfddcf62856af33f79f9f90791fa4a29819af61c3d7276e5be6" exitCode=0
Feb 16 18:01:07 crc kubenswrapper[4794]: I0216 18:01:07.357738 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29521081-8bkn4"
Feb 16 18:01:07 crc kubenswrapper[4794]: I0216 18:01:07.409709 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e46afa6-6711-47b2-88ca-b2b185d690e7-config-data\") pod \"4e46afa6-6711-47b2-88ca-b2b185d690e7\" (UID: \"4e46afa6-6711-47b2-88ca-b2b185d690e7\") "
Feb 16 18:01:07 crc kubenswrapper[4794]: I0216 18:01:07.410463 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e46afa6-6711-47b2-88ca-b2b185d690e7-fernet-keys\") pod \"4e46afa6-6711-47b2-88ca-b2b185d690e7\" (UID: \"4e46afa6-6711-47b2-88ca-b2b185d690e7\") "
Feb 16 18:01:07 crc kubenswrapper[4794]: I0216 18:01:07.410574 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls795\" (UniqueName: \"kubernetes.io/projected/4e46afa6-6711-47b2-88ca-b2b185d690e7-kube-api-access-ls795\") pod \"4e46afa6-6711-47b2-88ca-b2b185d690e7\" (UID: \"4e46afa6-6711-47b2-88ca-b2b185d690e7\") "
Feb 16 18:01:07 crc kubenswrapper[4794]: I0216 18:01:07.410621 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e46afa6-6711-47b2-88ca-b2b185d690e7-combined-ca-bundle\") pod \"4e46afa6-6711-47b2-88ca-b2b185d690e7\" (UID: \"4e46afa6-6711-47b2-88ca-b2b185d690e7\") "
Feb 16 18:01:07 crc kubenswrapper[4794]: I0216 18:01:07.417775 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e46afa6-6711-47b2-88ca-b2b185d690e7-kube-api-access-ls795" (OuterVolumeSpecName: "kube-api-access-ls795") pod "4e46afa6-6711-47b2-88ca-b2b185d690e7" (UID: "4e46afa6-6711-47b2-88ca-b2b185d690e7"). InnerVolumeSpecName "kube-api-access-ls795". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 18:01:07 crc kubenswrapper[4794]: I0216 18:01:07.420063 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e46afa6-6711-47b2-88ca-b2b185d690e7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4e46afa6-6711-47b2-88ca-b2b185d690e7" (UID: "4e46afa6-6711-47b2-88ca-b2b185d690e7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 18:01:07 crc kubenswrapper[4794]: I0216 18:01:07.450097 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e46afa6-6711-47b2-88ca-b2b185d690e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e46afa6-6711-47b2-88ca-b2b185d690e7" (UID: "4e46afa6-6711-47b2-88ca-b2b185d690e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 18:01:07 crc kubenswrapper[4794]: I0216 18:01:07.496983 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e46afa6-6711-47b2-88ca-b2b185d690e7-config-data" (OuterVolumeSpecName: "config-data") pod "4e46afa6-6711-47b2-88ca-b2b185d690e7" (UID: "4e46afa6-6711-47b2-88ca-b2b185d690e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 18:01:07 crc kubenswrapper[4794]: I0216 18:01:07.514374 4794 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e46afa6-6711-47b2-88ca-b2b185d690e7-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 16 18:01:07 crc kubenswrapper[4794]: I0216 18:01:07.514421 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls795\" (UniqueName: \"kubernetes.io/projected/4e46afa6-6711-47b2-88ca-b2b185d690e7-kube-api-access-ls795\") on node \"crc\" DevicePath \"\""
Feb 16 18:01:07 crc kubenswrapper[4794]: I0216 18:01:07.514436 4794 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e46afa6-6711-47b2-88ca-b2b185d690e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 18:01:07 crc kubenswrapper[4794]: I0216 18:01:07.514457 4794 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e46afa6-6711-47b2-88ca-b2b185d690e7-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 18:01:07 crc kubenswrapper[4794]: I0216 18:01:07.901590 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521081-8bkn4" event={"ID":"4e46afa6-6711-47b2-88ca-b2b185d690e7","Type":"ContainerDied","Data":"6e3ac97fa0518d76c515968ed8075742b5c8fe1a0047b3457c8dd3620b72e2e2"}
Feb 16 18:01:07 crc kubenswrapper[4794]: I0216 18:01:07.901646 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e3ac97fa0518d76c515968ed8075742b5c8fe1a0047b3457c8dd3620b72e2e2"
Feb 16 18:01:07 crc kubenswrapper[4794]: I0216 18:01:07.901680 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29521081-8bkn4"
Feb 16 18:01:08 crc kubenswrapper[4794]: E0216 18:01:08.797672 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:01:10 crc kubenswrapper[4794]: I0216 18:01:10.792074 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121"
Feb 16 18:01:10 crc kubenswrapper[4794]: E0216 18:01:10.792491 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 18:01:11 crc kubenswrapper[4794]: I0216 18:01:11.694941 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f258r"]
Feb 16 18:01:11 crc kubenswrapper[4794]: E0216 18:01:11.697889 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e46afa6-6711-47b2-88ca-b2b185d690e7" containerName="keystone-cron"
Feb 16 18:01:11 crc kubenswrapper[4794]: I0216 18:01:11.697933 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e46afa6-6711-47b2-88ca-b2b185d690e7" containerName="keystone-cron"
Feb 16 18:01:11 crc kubenswrapper[4794]: I0216 18:01:11.698219 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e46afa6-6711-47b2-88ca-b2b185d690e7" containerName="keystone-cron"
Feb 16 18:01:11 crc kubenswrapper[4794]: I0216 18:01:11.701697 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f258r"
Feb 16 18:01:11 crc kubenswrapper[4794]: I0216 18:01:11.711535 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f258r"]
Feb 16 18:01:11 crc kubenswrapper[4794]: I0216 18:01:11.821337 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11aceab4-0e41-4080-abbe-a7d2e12affc8-catalog-content\") pod \"certified-operators-f258r\" (UID: \"11aceab4-0e41-4080-abbe-a7d2e12affc8\") " pod="openshift-marketplace/certified-operators-f258r"
Feb 16 18:01:11 crc kubenswrapper[4794]: I0216 18:01:11.821542 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11aceab4-0e41-4080-abbe-a7d2e12affc8-utilities\") pod \"certified-operators-f258r\" (UID: \"11aceab4-0e41-4080-abbe-a7d2e12affc8\") " pod="openshift-marketplace/certified-operators-f258r"
Feb 16 18:01:11 crc kubenswrapper[4794]: I0216 18:01:11.822001 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c24nn\" (UniqueName: \"kubernetes.io/projected/11aceab4-0e41-4080-abbe-a7d2e12affc8-kube-api-access-c24nn\") pod \"certified-operators-f258r\" (UID: \"11aceab4-0e41-4080-abbe-a7d2e12affc8\") " pod="openshift-marketplace/certified-operators-f258r"
Feb 16 18:01:11 crc kubenswrapper[4794]: I0216 18:01:11.924952 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c24nn\" (UniqueName: \"kubernetes.io/projected/11aceab4-0e41-4080-abbe-a7d2e12affc8-kube-api-access-c24nn\") pod \"certified-operators-f258r\" (UID: \"11aceab4-0e41-4080-abbe-a7d2e12affc8\") " pod="openshift-marketplace/certified-operators-f258r"
Feb 16 18:01:11 crc kubenswrapper[4794]: I0216 18:01:11.925159 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11aceab4-0e41-4080-abbe-a7d2e12affc8-catalog-content\") pod \"certified-operators-f258r\" (UID: \"11aceab4-0e41-4080-abbe-a7d2e12affc8\") " pod="openshift-marketplace/certified-operators-f258r"
Feb 16 18:01:11 crc kubenswrapper[4794]: I0216 18:01:11.925206 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11aceab4-0e41-4080-abbe-a7d2e12affc8-utilities\") pod \"certified-operators-f258r\" (UID: \"11aceab4-0e41-4080-abbe-a7d2e12affc8\") " pod="openshift-marketplace/certified-operators-f258r"
Feb 16 18:01:11 crc kubenswrapper[4794]: I0216 18:01:11.925600 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11aceab4-0e41-4080-abbe-a7d2e12affc8-catalog-content\") pod \"certified-operators-f258r\" (UID: \"11aceab4-0e41-4080-abbe-a7d2e12affc8\") " pod="openshift-marketplace/certified-operators-f258r"
Feb 16 18:01:11 crc kubenswrapper[4794]: I0216 18:01:11.925753 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11aceab4-0e41-4080-abbe-a7d2e12affc8-utilities\") pod \"certified-operators-f258r\" (UID: \"11aceab4-0e41-4080-abbe-a7d2e12affc8\") " pod="openshift-marketplace/certified-operators-f258r"
Feb 16 18:01:11 crc kubenswrapper[4794]: I0216 18:01:11.944252 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c24nn\" (UniqueName: \"kubernetes.io/projected/11aceab4-0e41-4080-abbe-a7d2e12affc8-kube-api-access-c24nn\") pod \"certified-operators-f258r\" (UID: \"11aceab4-0e41-4080-abbe-a7d2e12affc8\") " pod="openshift-marketplace/certified-operators-f258r"
Feb 16 18:01:12 crc kubenswrapper[4794]: I0216 18:01:12.040165 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f258r"
Feb 16 18:01:12 crc kubenswrapper[4794]: I0216 18:01:12.218796 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mrn2w" podUID="a465458d-7515-43af-9220-6bd07e2a08ea" containerName="registry-server" probeResult="failure" output=<
Feb 16 18:01:12 crc kubenswrapper[4794]: timeout: failed to connect service ":50051" within 1s
Feb 16 18:01:12 crc kubenswrapper[4794]: >
Feb 16 18:01:12 crc kubenswrapper[4794]: I0216 18:01:12.606829 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f258r"]
Feb 16 18:01:12 crc kubenswrapper[4794]: I0216 18:01:12.966674 4794 generic.go:334] "Generic (PLEG): container finished" podID="11aceab4-0e41-4080-abbe-a7d2e12affc8" containerID="c1bf0969fc95eb03aeaff3d3cb8707faa37a2aa5a194ce5e6903096cba1af002" exitCode=0
Feb 16 18:01:12 crc kubenswrapper[4794]: I0216 18:01:12.966728 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f258r" event={"ID":"11aceab4-0e41-4080-abbe-a7d2e12affc8","Type":"ContainerDied","Data":"c1bf0969fc95eb03aeaff3d3cb8707faa37a2aa5a194ce5e6903096cba1af002"}
Feb 16 18:01:12 crc kubenswrapper[4794]: I0216 18:01:12.966783 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f258r" event={"ID":"11aceab4-0e41-4080-abbe-a7d2e12affc8","Type":"ContainerStarted","Data":"6677097f93a2401793e0325f5f6e58ad626163ad19ca4758acecbe0ec695c940"}
Feb 16 18:01:13 crc kubenswrapper[4794]: I0216 18:01:13.978576 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f258r" event={"ID":"11aceab4-0e41-4080-abbe-a7d2e12affc8","Type":"ContainerStarted","Data":"55e502a2819abd4fe9ba705dd5eda017962587ab72de33af9dbad994b5bb94bb"}
Feb 16 18:01:14 crc kubenswrapper[4794]: I0216 18:01:14.989364 4794 generic.go:334] "Generic (PLEG): container finished" podID="11aceab4-0e41-4080-abbe-a7d2e12affc8" containerID="55e502a2819abd4fe9ba705dd5eda017962587ab72de33af9dbad994b5bb94bb" exitCode=0
Feb 16 18:01:14 crc kubenswrapper[4794]: I0216 18:01:14.989406 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f258r" event={"ID":"11aceab4-0e41-4080-abbe-a7d2e12affc8","Type":"ContainerDied","Data":"55e502a2819abd4fe9ba705dd5eda017962587ab72de33af9dbad994b5bb94bb"}
Feb 16 18:01:15 crc kubenswrapper[4794]: E0216 18:01:15.794067 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:01:16 crc kubenswrapper[4794]: I0216 18:01:16.027919 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f258r" event={"ID":"11aceab4-0e41-4080-abbe-a7d2e12affc8","Type":"ContainerStarted","Data":"fa0d12cbb635d1aa8981af006ab00a57d530c13dbf1ed2fd4a7c784b17f16744"}
Feb 16 18:01:16 crc kubenswrapper[4794]: I0216 18:01:16.054707 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f258r" podStartSLOduration=2.564366305 podStartE2EDuration="5.054689719s" podCreationTimestamp="2026-02-16 18:01:11 +0000 UTC" firstStartedPulling="2026-02-16 18:01:12.968698475 +0000 UTC m=+3698.916793122" lastFinishedPulling="2026-02-16 18:01:15.459021889 +0000 UTC m=+3701.407116536" observedRunningTime="2026-02-16 18:01:16.052371633 +0000 UTC m=+3702.000466300" watchObservedRunningTime="2026-02-16 18:01:16.054689719 +0000 UTC m=+3702.002784366"
Feb 16 18:01:20 crc kubenswrapper[4794]: E0216 18:01:20.795633 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:01:22 crc kubenswrapper[4794]: I0216 18:01:22.040963 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f258r"
Feb 16 18:01:22 crc kubenswrapper[4794]: I0216 18:01:22.041340 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f258r"
Feb 16 18:01:22 crc kubenswrapper[4794]: I0216 18:01:22.095645 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f258r"
Feb 16 18:01:22 crc kubenswrapper[4794]: I0216 18:01:22.155932 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f258r"
Feb 16 18:01:22 crc kubenswrapper[4794]: I0216 18:01:22.193133 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mrn2w" podUID="a465458d-7515-43af-9220-6bd07e2a08ea" containerName="registry-server" probeResult="failure" output=<
Feb 16 18:01:22 crc kubenswrapper[4794]: timeout: failed to connect service ":50051" within 1s
Feb 16 18:01:22 crc kubenswrapper[4794]: >
Feb 16 18:01:22 crc kubenswrapper[4794]: I0216 18:01:22.345383 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f258r"]
Feb 16 18:01:22 crc kubenswrapper[4794]: I0216 18:01:22.791709 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121"
Feb 16 18:01:22 crc kubenswrapper[4794]: E0216 18:01:22.792104 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 18:01:24 crc kubenswrapper[4794]: I0216 18:01:24.106281 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f258r" podUID="11aceab4-0e41-4080-abbe-a7d2e12affc8" containerName="registry-server" containerID="cri-o://fa0d12cbb635d1aa8981af006ab00a57d530c13dbf1ed2fd4a7c784b17f16744" gracePeriod=2
Feb 16 18:01:24 crc kubenswrapper[4794]: I0216 18:01:24.652617 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f258r"
Feb 16 18:01:24 crc kubenswrapper[4794]: I0216 18:01:24.764477 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11aceab4-0e41-4080-abbe-a7d2e12affc8-utilities\") pod \"11aceab4-0e41-4080-abbe-a7d2e12affc8\" (UID: \"11aceab4-0e41-4080-abbe-a7d2e12affc8\") "
Feb 16 18:01:24 crc kubenswrapper[4794]: I0216 18:01:24.764566 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11aceab4-0e41-4080-abbe-a7d2e12affc8-catalog-content\") pod \"11aceab4-0e41-4080-abbe-a7d2e12affc8\" (UID: \"11aceab4-0e41-4080-abbe-a7d2e12affc8\") "
Feb 16 18:01:24 crc kubenswrapper[4794]: I0216 18:01:24.764641 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c24nn\" (UniqueName: \"kubernetes.io/projected/11aceab4-0e41-4080-abbe-a7d2e12affc8-kube-api-access-c24nn\") pod \"11aceab4-0e41-4080-abbe-a7d2e12affc8\" (UID: \"11aceab4-0e41-4080-abbe-a7d2e12affc8\") "
Feb 16 18:01:24 crc kubenswrapper[4794]: I0216 18:01:24.765352 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11aceab4-0e41-4080-abbe-a7d2e12affc8-utilities" (OuterVolumeSpecName: "utilities") pod "11aceab4-0e41-4080-abbe-a7d2e12affc8" (UID: "11aceab4-0e41-4080-abbe-a7d2e12affc8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 18:01:24 crc kubenswrapper[4794]: I0216 18:01:24.770579 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11aceab4-0e41-4080-abbe-a7d2e12affc8-kube-api-access-c24nn" (OuterVolumeSpecName: "kube-api-access-c24nn") pod "11aceab4-0e41-4080-abbe-a7d2e12affc8" (UID: "11aceab4-0e41-4080-abbe-a7d2e12affc8"). InnerVolumeSpecName "kube-api-access-c24nn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 18:01:24 crc kubenswrapper[4794]: I0216 18:01:24.816826 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11aceab4-0e41-4080-abbe-a7d2e12affc8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11aceab4-0e41-4080-abbe-a7d2e12affc8" (UID: "11aceab4-0e41-4080-abbe-a7d2e12affc8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 18:01:24 crc kubenswrapper[4794]: I0216 18:01:24.868535 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11aceab4-0e41-4080-abbe-a7d2e12affc8-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 18:01:24 crc kubenswrapper[4794]: I0216 18:01:24.868562 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11aceab4-0e41-4080-abbe-a7d2e12affc8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 18:01:24 crc kubenswrapper[4794]: I0216 18:01:24.868574 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c24nn\" (UniqueName: \"kubernetes.io/projected/11aceab4-0e41-4080-abbe-a7d2e12affc8-kube-api-access-c24nn\") on node \"crc\" DevicePath \"\""
Feb 16 18:01:25 crc kubenswrapper[4794]: I0216 18:01:25.120143 4794 generic.go:334] "Generic (PLEG): container finished" podID="11aceab4-0e41-4080-abbe-a7d2e12affc8" containerID="fa0d12cbb635d1aa8981af006ab00a57d530c13dbf1ed2fd4a7c784b17f16744" exitCode=0
Feb 16 18:01:25 crc kubenswrapper[4794]: I0216 18:01:25.120201 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f258r" event={"ID":"11aceab4-0e41-4080-abbe-a7d2e12affc8","Type":"ContainerDied","Data":"fa0d12cbb635d1aa8981af006ab00a57d530c13dbf1ed2fd4a7c784b17f16744"}
Feb 16 18:01:25 crc kubenswrapper[4794]: I0216 18:01:25.120216 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f258r"
Feb 16 18:01:25 crc kubenswrapper[4794]: I0216 18:01:25.120238 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f258r" event={"ID":"11aceab4-0e41-4080-abbe-a7d2e12affc8","Type":"ContainerDied","Data":"6677097f93a2401793e0325f5f6e58ad626163ad19ca4758acecbe0ec695c940"}
Feb 16 18:01:25 crc kubenswrapper[4794]: I0216 18:01:25.120264 4794 scope.go:117] "RemoveContainer" containerID="fa0d12cbb635d1aa8981af006ab00a57d530c13dbf1ed2fd4a7c784b17f16744"
Feb 16 18:01:25 crc kubenswrapper[4794]: I0216 18:01:25.167364 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f258r"]
Feb 16 18:01:25 crc kubenswrapper[4794]: I0216 18:01:25.173707 4794 scope.go:117] "RemoveContainer" containerID="55e502a2819abd4fe9ba705dd5eda017962587ab72de33af9dbad994b5bb94bb"
Feb 16 18:01:25 crc kubenswrapper[4794]: I0216 18:01:25.185605 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f258r"]
Feb 16 18:01:25 crc kubenswrapper[4794]: I0216 18:01:25.198714 4794 scope.go:117] "RemoveContainer" containerID="c1bf0969fc95eb03aeaff3d3cb8707faa37a2aa5a194ce5e6903096cba1af002"
Feb 16 18:01:25 crc kubenswrapper[4794]: I0216 18:01:25.258082 4794 scope.go:117] "RemoveContainer" containerID="fa0d12cbb635d1aa8981af006ab00a57d530c13dbf1ed2fd4a7c784b17f16744"
Feb 16 18:01:25 crc kubenswrapper[4794]: E0216 18:01:25.258730 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa0d12cbb635d1aa8981af006ab00a57d530c13dbf1ed2fd4a7c784b17f16744\": container with ID starting with fa0d12cbb635d1aa8981af006ab00a57d530c13dbf1ed2fd4a7c784b17f16744 not found: ID does not exist" containerID="fa0d12cbb635d1aa8981af006ab00a57d530c13dbf1ed2fd4a7c784b17f16744"
Feb 16 18:01:25 crc kubenswrapper[4794]: I0216 18:01:25.258767 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa0d12cbb635d1aa8981af006ab00a57d530c13dbf1ed2fd4a7c784b17f16744"} err="failed to get container status \"fa0d12cbb635d1aa8981af006ab00a57d530c13dbf1ed2fd4a7c784b17f16744\": rpc error: code = NotFound desc = could not find container \"fa0d12cbb635d1aa8981af006ab00a57d530c13dbf1ed2fd4a7c784b17f16744\": container with ID starting with fa0d12cbb635d1aa8981af006ab00a57d530c13dbf1ed2fd4a7c784b17f16744 not found: ID does not exist"
Feb 16 18:01:25 crc kubenswrapper[4794]: I0216 18:01:25.258799 4794 scope.go:117] "RemoveContainer" containerID="55e502a2819abd4fe9ba705dd5eda017962587ab72de33af9dbad994b5bb94bb"
Feb 16 18:01:25 crc kubenswrapper[4794]: E0216 18:01:25.259079 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55e502a2819abd4fe9ba705dd5eda017962587ab72de33af9dbad994b5bb94bb\": container with ID starting with 55e502a2819abd4fe9ba705dd5eda017962587ab72de33af9dbad994b5bb94bb not found: ID does not exist" containerID="55e502a2819abd4fe9ba705dd5eda017962587ab72de33af9dbad994b5bb94bb"
Feb 16 18:01:25 crc kubenswrapper[4794]: I0216 18:01:25.259106 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e502a2819abd4fe9ba705dd5eda017962587ab72de33af9dbad994b5bb94bb"} err="failed to get container status \"55e502a2819abd4fe9ba705dd5eda017962587ab72de33af9dbad994b5bb94bb\": rpc error: code = NotFound desc = could not find container \"55e502a2819abd4fe9ba705dd5eda017962587ab72de33af9dbad994b5bb94bb\": container with ID starting with 55e502a2819abd4fe9ba705dd5eda017962587ab72de33af9dbad994b5bb94bb not found: ID does not exist"
Feb 16 18:01:25 crc kubenswrapper[4794]: I0216 18:01:25.259125 4794 scope.go:117] "RemoveContainer" containerID="c1bf0969fc95eb03aeaff3d3cb8707faa37a2aa5a194ce5e6903096cba1af002"
Feb 16 18:01:25 crc kubenswrapper[4794]: E0216 18:01:25.259648 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1bf0969fc95eb03aeaff3d3cb8707faa37a2aa5a194ce5e6903096cba1af002\": container with ID starting with c1bf0969fc95eb03aeaff3d3cb8707faa37a2aa5a194ce5e6903096cba1af002 not found: ID does not exist" containerID="c1bf0969fc95eb03aeaff3d3cb8707faa37a2aa5a194ce5e6903096cba1af002"
Feb 16 18:01:25 crc kubenswrapper[4794]: I0216 18:01:25.259679 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1bf0969fc95eb03aeaff3d3cb8707faa37a2aa5a194ce5e6903096cba1af002"} err="failed to get container status \"c1bf0969fc95eb03aeaff3d3cb8707faa37a2aa5a194ce5e6903096cba1af002\": rpc error: code = NotFound desc = could not find container \"c1bf0969fc95eb03aeaff3d3cb8707faa37a2aa5a194ce5e6903096cba1af002\": container with ID starting with c1bf0969fc95eb03aeaff3d3cb8707faa37a2aa5a194ce5e6903096cba1af002 not found: ID does not exist"
Feb 16 18:01:26 crc kubenswrapper[4794]: I0216 18:01:26.809176 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11aceab4-0e41-4080-abbe-a7d2e12affc8" path="/var/lib/kubelet/pods/11aceab4-0e41-4080-abbe-a7d2e12affc8/volumes"
Feb 16 18:01:28 crc kubenswrapper[4794]: E0216 18:01:28.793158 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:01:31 crc kubenswrapper[4794]: I0216 18:01:31.198847 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mrn2w"
Feb 16 18:01:31 crc kubenswrapper[4794]: I0216 18:01:31.267966 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mrn2w"
Feb 16 18:01:31 crc kubenswrapper[4794]: I0216 18:01:31.442871 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mrn2w"]
Feb 16 18:01:32 crc kubenswrapper[4794]: I0216 18:01:32.384034 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mrn2w" podUID="a465458d-7515-43af-9220-6bd07e2a08ea" containerName="registry-server" containerID="cri-o://e9fcc9313a6315c523f6e20244d4bc7100f4580443f356bc695afcd27033a6ab" gracePeriod=2
Feb 16 18:01:32 crc kubenswrapper[4794]: E0216 18:01:32.794115 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:01:32 crc kubenswrapper[4794]: I0216 18:01:32.958975 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mrn2w"
Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.062071 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7xh2\" (UniqueName: \"kubernetes.io/projected/a465458d-7515-43af-9220-6bd07e2a08ea-kube-api-access-t7xh2\") pod \"a465458d-7515-43af-9220-6bd07e2a08ea\" (UID: \"a465458d-7515-43af-9220-6bd07e2a08ea\") "
Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.062559 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a465458d-7515-43af-9220-6bd07e2a08ea-utilities\") pod \"a465458d-7515-43af-9220-6bd07e2a08ea\" (UID: \"a465458d-7515-43af-9220-6bd07e2a08ea\") "
Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.062669 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a465458d-7515-43af-9220-6bd07e2a08ea-catalog-content\") pod \"a465458d-7515-43af-9220-6bd07e2a08ea\" (UID: \"a465458d-7515-43af-9220-6bd07e2a08ea\") "
Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.063365 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a465458d-7515-43af-9220-6bd07e2a08ea-utilities" (OuterVolumeSpecName: "utilities") pod "a465458d-7515-43af-9220-6bd07e2a08ea" (UID: "a465458d-7515-43af-9220-6bd07e2a08ea"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.064504 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a465458d-7515-43af-9220-6bd07e2a08ea-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.068892 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a465458d-7515-43af-9220-6bd07e2a08ea-kube-api-access-t7xh2" (OuterVolumeSpecName: "kube-api-access-t7xh2") pod "a465458d-7515-43af-9220-6bd07e2a08ea" (UID: "a465458d-7515-43af-9220-6bd07e2a08ea"). InnerVolumeSpecName "kube-api-access-t7xh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.167412 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7xh2\" (UniqueName: \"kubernetes.io/projected/a465458d-7515-43af-9220-6bd07e2a08ea-kube-api-access-t7xh2\") on node \"crc\" DevicePath \"\"" Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.181221 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a465458d-7515-43af-9220-6bd07e2a08ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a465458d-7515-43af-9220-6bd07e2a08ea" (UID: "a465458d-7515-43af-9220-6bd07e2a08ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.269448 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a465458d-7515-43af-9220-6bd07e2a08ea-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.398773 4794 generic.go:334] "Generic (PLEG): container finished" podID="a465458d-7515-43af-9220-6bd07e2a08ea" containerID="e9fcc9313a6315c523f6e20244d4bc7100f4580443f356bc695afcd27033a6ab" exitCode=0 Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.399506 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mrn2w" Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.399516 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrn2w" event={"ID":"a465458d-7515-43af-9220-6bd07e2a08ea","Type":"ContainerDied","Data":"e9fcc9313a6315c523f6e20244d4bc7100f4580443f356bc695afcd27033a6ab"} Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.399978 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrn2w" event={"ID":"a465458d-7515-43af-9220-6bd07e2a08ea","Type":"ContainerDied","Data":"5fdd871548095e1dd134b0fda086f8b75ce0abedf127651d00d2ae5d6951b8b8"} Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.400027 4794 scope.go:117] "RemoveContainer" containerID="e9fcc9313a6315c523f6e20244d4bc7100f4580443f356bc695afcd27033a6ab" Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.443867 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mrn2w"] Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.445851 4794 scope.go:117] "RemoveContainer" containerID="77c83bf9c83d18cf9f3d7f6147ae3ef5f8b4637c24d74671147c30dc1ae58f3d" Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 
18:01:33.452588 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mrn2w"] Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.483595 4794 scope.go:117] "RemoveContainer" containerID="e1f9d18981484f9166134c8dcdfa200895ec7769bd76ed99b375201ed689060f" Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.528060 4794 scope.go:117] "RemoveContainer" containerID="e9fcc9313a6315c523f6e20244d4bc7100f4580443f356bc695afcd27033a6ab" Feb 16 18:01:33 crc kubenswrapper[4794]: E0216 18:01:33.528529 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9fcc9313a6315c523f6e20244d4bc7100f4580443f356bc695afcd27033a6ab\": container with ID starting with e9fcc9313a6315c523f6e20244d4bc7100f4580443f356bc695afcd27033a6ab not found: ID does not exist" containerID="e9fcc9313a6315c523f6e20244d4bc7100f4580443f356bc695afcd27033a6ab" Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.528563 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9fcc9313a6315c523f6e20244d4bc7100f4580443f356bc695afcd27033a6ab"} err="failed to get container status \"e9fcc9313a6315c523f6e20244d4bc7100f4580443f356bc695afcd27033a6ab\": rpc error: code = NotFound desc = could not find container \"e9fcc9313a6315c523f6e20244d4bc7100f4580443f356bc695afcd27033a6ab\": container with ID starting with e9fcc9313a6315c523f6e20244d4bc7100f4580443f356bc695afcd27033a6ab not found: ID does not exist" Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.528585 4794 scope.go:117] "RemoveContainer" containerID="77c83bf9c83d18cf9f3d7f6147ae3ef5f8b4637c24d74671147c30dc1ae58f3d" Feb 16 18:01:33 crc kubenswrapper[4794]: E0216 18:01:33.528910 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77c83bf9c83d18cf9f3d7f6147ae3ef5f8b4637c24d74671147c30dc1ae58f3d\": container with ID 
starting with 77c83bf9c83d18cf9f3d7f6147ae3ef5f8b4637c24d74671147c30dc1ae58f3d not found: ID does not exist" containerID="77c83bf9c83d18cf9f3d7f6147ae3ef5f8b4637c24d74671147c30dc1ae58f3d" Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.528930 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77c83bf9c83d18cf9f3d7f6147ae3ef5f8b4637c24d74671147c30dc1ae58f3d"} err="failed to get container status \"77c83bf9c83d18cf9f3d7f6147ae3ef5f8b4637c24d74671147c30dc1ae58f3d\": rpc error: code = NotFound desc = could not find container \"77c83bf9c83d18cf9f3d7f6147ae3ef5f8b4637c24d74671147c30dc1ae58f3d\": container with ID starting with 77c83bf9c83d18cf9f3d7f6147ae3ef5f8b4637c24d74671147c30dc1ae58f3d not found: ID does not exist" Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.528944 4794 scope.go:117] "RemoveContainer" containerID="e1f9d18981484f9166134c8dcdfa200895ec7769bd76ed99b375201ed689060f" Feb 16 18:01:33 crc kubenswrapper[4794]: E0216 18:01:33.530548 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1f9d18981484f9166134c8dcdfa200895ec7769bd76ed99b375201ed689060f\": container with ID starting with e1f9d18981484f9166134c8dcdfa200895ec7769bd76ed99b375201ed689060f not found: ID does not exist" containerID="e1f9d18981484f9166134c8dcdfa200895ec7769bd76ed99b375201ed689060f" Feb 16 18:01:33 crc kubenswrapper[4794]: I0216 18:01:33.530572 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1f9d18981484f9166134c8dcdfa200895ec7769bd76ed99b375201ed689060f"} err="failed to get container status \"e1f9d18981484f9166134c8dcdfa200895ec7769bd76ed99b375201ed689060f\": rpc error: code = NotFound desc = could not find container \"e1f9d18981484f9166134c8dcdfa200895ec7769bd76ed99b375201ed689060f\": container with ID starting with e1f9d18981484f9166134c8dcdfa200895ec7769bd76ed99b375201ed689060f not found: 
ID does not exist" Feb 16 18:01:34 crc kubenswrapper[4794]: I0216 18:01:34.805147 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a465458d-7515-43af-9220-6bd07e2a08ea" path="/var/lib/kubelet/pods/a465458d-7515-43af-9220-6bd07e2a08ea/volumes" Feb 16 18:01:35 crc kubenswrapper[4794]: I0216 18:01:35.791452 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121" Feb 16 18:01:35 crc kubenswrapper[4794]: E0216 18:01:35.792076 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:01:42 crc kubenswrapper[4794]: E0216 18:01:42.793653 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:01:43 crc kubenswrapper[4794]: E0216 18:01:43.794129 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:01:48 crc kubenswrapper[4794]: I0216 18:01:48.792101 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121" Feb 16 18:01:48 crc kubenswrapper[4794]: E0216 18:01:48.794368 4794 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:01:54 crc kubenswrapper[4794]: E0216 18:01:54.815252 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:01:54 crc kubenswrapper[4794]: E0216 18:01:54.815345 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:02:00 crc kubenswrapper[4794]: I0216 18:02:00.791608 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121" Feb 16 18:02:00 crc kubenswrapper[4794]: E0216 18:02:00.792582 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:02:07 crc kubenswrapper[4794]: E0216 18:02:07.793558 4794 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:02:07 crc kubenswrapper[4794]: E0216 18:02:07.793572 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:02:14 crc kubenswrapper[4794]: I0216 18:02:14.810022 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121" Feb 16 18:02:14 crc kubenswrapper[4794]: E0216 18:02:14.811239 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:02:19 crc kubenswrapper[4794]: E0216 18:02:19.794369 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:02:21 crc kubenswrapper[4794]: I0216 18:02:21.793615 4794 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 18:02:21 crc kubenswrapper[4794]: 
E0216 18:02:21.879905 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 18:02:21 crc kubenswrapper[4794]: E0216 18:02:21.879968 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 18:02:21 crc kubenswrapper[4794]: E0216 18:02:21.880103 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2h5l2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7gcsf_openstack(c695f880-15cb-45b1-8545-60d8437ec631): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 18:02:21 crc kubenswrapper[4794]: E0216 18:02:21.881319 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:02:26 crc kubenswrapper[4794]: I0216 18:02:26.792036 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121" Feb 16 18:02:26 crc kubenswrapper[4794]: E0216 18:02:26.794466 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:02:31 crc kubenswrapper[4794]: E0216 18:02:31.934176 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has 
expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 18:02:31 crc kubenswrapper[4794]: E0216 18:02:31.934795 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 18:02:31 crc kubenswrapper[4794]: E0216 18:02:31.934936 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh58dh6ch557h84h55ch564h5bh58fh5c8h5d4h584h669h667h569h59hd5hdbh9dh67ch5f9h59fh597h96h664h687h66dhfch5ddh5b7h88h59cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9v9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8981f528-1f74-4d56-a93c-22860725b490): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 18:02:31 crc kubenswrapper[4794]: E0216 18:02:31.936111 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:02:36 crc kubenswrapper[4794]: E0216 18:02:36.794982 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:02:37 crc kubenswrapper[4794]: I0216 18:02:37.792366 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121" Feb 16 18:02:37 crc kubenswrapper[4794]: E0216 18:02:37.793664 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:02:44 crc kubenswrapper[4794]: E0216 18:02:44.816986 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:02:50 crc kubenswrapper[4794]: E0216 18:02:50.794565 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:02:52 crc kubenswrapper[4794]: I0216 18:02:52.792250 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121" Feb 16 18:02:52 crc kubenswrapper[4794]: E0216 18:02:52.793246 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:02:56 crc kubenswrapper[4794]: E0216 18:02:56.804253 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:03:04 crc kubenswrapper[4794]: E0216 18:03:04.806623 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:03:05 crc kubenswrapper[4794]: I0216 18:03:05.791920 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121" Feb 16 18:03:05 crc kubenswrapper[4794]: E0216 18:03:05.792424 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:03:11 crc kubenswrapper[4794]: E0216 18:03:11.796034 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:03:15 crc kubenswrapper[4794]: E0216 18:03:15.794654 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:03:19 crc kubenswrapper[4794]: I0216 18:03:19.792425 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121" Feb 16 18:03:19 crc kubenswrapper[4794]: E0216 18:03:19.793468 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:03:26 crc kubenswrapper[4794]: E0216 18:03:26.796506 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:03:29 crc kubenswrapper[4794]: E0216 18:03:29.810880 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:03:32 crc kubenswrapper[4794]: I0216 18:03:32.791652 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121" Feb 16 18:03:32 crc kubenswrapper[4794]: E0216 18:03:32.794608 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:03:40 crc kubenswrapper[4794]: E0216 18:03:40.794394 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:03:41 crc kubenswrapper[4794]: E0216 18:03:41.795010 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:03:43 crc kubenswrapper[4794]: I0216 18:03:43.792139 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121" Feb 16 18:03:43 crc kubenswrapper[4794]: E0216 18:03:43.792715 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:03:55 crc kubenswrapper[4794]: E0216 18:03:55.795620 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:03:55 crc kubenswrapper[4794]: E0216 18:03:55.795831 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:03:57 crc kubenswrapper[4794]: I0216 18:03:57.792320 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121" Feb 16 18:03:58 crc kubenswrapper[4794]: I0216 18:03:58.148760 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"ff6c0c9a8ff214790fe27eab00a8571918a74a6f3bddb6e4205a0b13c5dcd7b5"} Feb 16 18:04:10 crc kubenswrapper[4794]: E0216 18:04:10.793517 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:04:10 crc kubenswrapper[4794]: E0216 18:04:10.794007 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:04:21 crc kubenswrapper[4794]: E0216 18:04:21.794489 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:04:23 crc kubenswrapper[4794]: E0216 18:04:23.794176 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:04:34 crc kubenswrapper[4794]: E0216 18:04:34.804946 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:04:37 crc kubenswrapper[4794]: E0216 18:04:37.794617 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:04:46 crc kubenswrapper[4794]: E0216 18:04:46.795026 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:04:52 crc kubenswrapper[4794]: E0216 18:04:52.799015 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:04:58 crc kubenswrapper[4794]: E0216 18:04:58.793342 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:05:07 crc kubenswrapper[4794]: E0216 18:05:07.796691 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:05:11 crc kubenswrapper[4794]: E0216 18:05:11.795412 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:05:15 crc kubenswrapper[4794]: I0216 18:05:15.115154 4794 generic.go:334] "Generic (PLEG): container finished" podID="7566f2a1-be5c-4ab7-8639-e162712a8ea4" containerID="e5f2b42706e76bb61cf97a726f1ee07ee60a7d6ccb4e89d107caee62ba4d8189" exitCode=2 Feb 16 18:05:15 crc kubenswrapper[4794]: I0216 18:05:15.115675 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8s26h" event={"ID":"7566f2a1-be5c-4ab7-8639-e162712a8ea4","Type":"ContainerDied","Data":"e5f2b42706e76bb61cf97a726f1ee07ee60a7d6ccb4e89d107caee62ba4d8189"} Feb 16 18:05:16 crc kubenswrapper[4794]: I0216 18:05:16.772423 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8s26h" Feb 16 18:05:16 crc kubenswrapper[4794]: I0216 18:05:16.882622 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xqph\" (UniqueName: \"kubernetes.io/projected/7566f2a1-be5c-4ab7-8639-e162712a8ea4-kube-api-access-5xqph\") pod \"7566f2a1-be5c-4ab7-8639-e162712a8ea4\" (UID: \"7566f2a1-be5c-4ab7-8639-e162712a8ea4\") " Feb 16 18:05:16 crc kubenswrapper[4794]: I0216 18:05:16.882794 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7566f2a1-be5c-4ab7-8639-e162712a8ea4-ssh-key-openstack-edpm-ipam\") pod \"7566f2a1-be5c-4ab7-8639-e162712a8ea4\" (UID: \"7566f2a1-be5c-4ab7-8639-e162712a8ea4\") " Feb 16 18:05:16 crc kubenswrapper[4794]: I0216 18:05:16.882869 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7566f2a1-be5c-4ab7-8639-e162712a8ea4-inventory\") pod \"7566f2a1-be5c-4ab7-8639-e162712a8ea4\" (UID: \"7566f2a1-be5c-4ab7-8639-e162712a8ea4\") " Feb 16 18:05:16 crc kubenswrapper[4794]: I0216 18:05:16.896979 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7566f2a1-be5c-4ab7-8639-e162712a8ea4-kube-api-access-5xqph" (OuterVolumeSpecName: "kube-api-access-5xqph") pod "7566f2a1-be5c-4ab7-8639-e162712a8ea4" (UID: "7566f2a1-be5c-4ab7-8639-e162712a8ea4"). InnerVolumeSpecName "kube-api-access-5xqph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 18:05:16 crc kubenswrapper[4794]: I0216 18:05:16.915646 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7566f2a1-be5c-4ab7-8639-e162712a8ea4-inventory" (OuterVolumeSpecName: "inventory") pod "7566f2a1-be5c-4ab7-8639-e162712a8ea4" (UID: "7566f2a1-be5c-4ab7-8639-e162712a8ea4"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 18:05:16 crc kubenswrapper[4794]: I0216 18:05:16.921257 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7566f2a1-be5c-4ab7-8639-e162712a8ea4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7566f2a1-be5c-4ab7-8639-e162712a8ea4" (UID: "7566f2a1-be5c-4ab7-8639-e162712a8ea4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 18:05:16 crc kubenswrapper[4794]: I0216 18:05:16.986514 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xqph\" (UniqueName: \"kubernetes.io/projected/7566f2a1-be5c-4ab7-8639-e162712a8ea4-kube-api-access-5xqph\") on node \"crc\" DevicePath \"\"" Feb 16 18:05:16 crc kubenswrapper[4794]: I0216 18:05:16.987575 4794 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7566f2a1-be5c-4ab7-8639-e162712a8ea4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 18:05:16 crc kubenswrapper[4794]: I0216 18:05:16.987605 4794 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7566f2a1-be5c-4ab7-8639-e162712a8ea4-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 18:05:17 crc kubenswrapper[4794]: I0216 18:05:17.137171 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8s26h" event={"ID":"7566f2a1-be5c-4ab7-8639-e162712a8ea4","Type":"ContainerDied","Data":"7cd9d8774619fcc90573bc3afd7bc3960052f1d0f47a2792326c1e7acfc42a65"} Feb 16 18:05:17 crc kubenswrapper[4794]: I0216 18:05:17.137218 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cd9d8774619fcc90573bc3afd7bc3960052f1d0f47a2792326c1e7acfc42a65" Feb 16 18:05:17 crc kubenswrapper[4794]: I0216 
18:05:17.137344 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8s26h" Feb 16 18:05:18 crc kubenswrapper[4794]: E0216 18:05:18.793568 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:05:23 crc kubenswrapper[4794]: E0216 18:05:23.793755 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:05:33 crc kubenswrapper[4794]: E0216 18:05:33.792952 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:05:34 crc kubenswrapper[4794]: E0216 18:05:34.819663 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:05:45 crc kubenswrapper[4794]: E0216 18:05:45.793838 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:05:49 crc kubenswrapper[4794]: E0216 18:05:49.794183 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:05:58 crc kubenswrapper[4794]: E0216 18:05:58.796111 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:06:03 crc kubenswrapper[4794]: E0216 18:06:03.801677 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:06:12 crc kubenswrapper[4794]: E0216 18:06:12.794906 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:06:14 crc kubenswrapper[4794]: E0216 18:06:14.812807 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:06:20 crc kubenswrapper[4794]: I0216 18:06:20.141225 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 18:06:20 crc kubenswrapper[4794]: I0216 18:06:20.141869 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 18:06:24 crc kubenswrapper[4794]: E0216 18:06:24.812519 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:06:26 crc kubenswrapper[4794]: E0216 18:06:26.796812 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:06:37 crc kubenswrapper[4794]: I0216 18:06:37.705956 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fcwcz"] Feb 16 18:06:37 crc kubenswrapper[4794]: E0216 
18:06:37.707217 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a465458d-7515-43af-9220-6bd07e2a08ea" containerName="extract-content" Feb 16 18:06:37 crc kubenswrapper[4794]: I0216 18:06:37.707235 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="a465458d-7515-43af-9220-6bd07e2a08ea" containerName="extract-content" Feb 16 18:06:37 crc kubenswrapper[4794]: E0216 18:06:37.707268 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11aceab4-0e41-4080-abbe-a7d2e12affc8" containerName="extract-content" Feb 16 18:06:37 crc kubenswrapper[4794]: I0216 18:06:37.707275 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="11aceab4-0e41-4080-abbe-a7d2e12affc8" containerName="extract-content" Feb 16 18:06:37 crc kubenswrapper[4794]: E0216 18:06:37.707294 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11aceab4-0e41-4080-abbe-a7d2e12affc8" containerName="extract-utilities" Feb 16 18:06:37 crc kubenswrapper[4794]: I0216 18:06:37.707328 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="11aceab4-0e41-4080-abbe-a7d2e12affc8" containerName="extract-utilities" Feb 16 18:06:37 crc kubenswrapper[4794]: E0216 18:06:37.707354 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11aceab4-0e41-4080-abbe-a7d2e12affc8" containerName="registry-server" Feb 16 18:06:37 crc kubenswrapper[4794]: I0216 18:06:37.707361 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="11aceab4-0e41-4080-abbe-a7d2e12affc8" containerName="registry-server" Feb 16 18:06:37 crc kubenswrapper[4794]: E0216 18:06:37.707380 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a465458d-7515-43af-9220-6bd07e2a08ea" containerName="registry-server" Feb 16 18:06:37 crc kubenswrapper[4794]: I0216 18:06:37.707387 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="a465458d-7515-43af-9220-6bd07e2a08ea" containerName="registry-server" Feb 16 18:06:37 crc kubenswrapper[4794]: E0216 
18:06:37.707412 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a465458d-7515-43af-9220-6bd07e2a08ea" containerName="extract-utilities" Feb 16 18:06:37 crc kubenswrapper[4794]: I0216 18:06:37.707420 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="a465458d-7515-43af-9220-6bd07e2a08ea" containerName="extract-utilities" Feb 16 18:06:37 crc kubenswrapper[4794]: E0216 18:06:37.707435 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7566f2a1-be5c-4ab7-8639-e162712a8ea4" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 18:06:37 crc kubenswrapper[4794]: I0216 18:06:37.707443 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="7566f2a1-be5c-4ab7-8639-e162712a8ea4" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 18:06:37 crc kubenswrapper[4794]: I0216 18:06:37.707719 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="7566f2a1-be5c-4ab7-8639-e162712a8ea4" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 18:06:37 crc kubenswrapper[4794]: I0216 18:06:37.707733 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="a465458d-7515-43af-9220-6bd07e2a08ea" containerName="registry-server" Feb 16 18:06:37 crc kubenswrapper[4794]: I0216 18:06:37.707757 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="11aceab4-0e41-4080-abbe-a7d2e12affc8" containerName="registry-server" Feb 16 18:06:37 crc kubenswrapper[4794]: I0216 18:06:37.709920 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcwcz" Feb 16 18:06:37 crc kubenswrapper[4794]: I0216 18:06:37.727157 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcwcz"] Feb 16 18:06:37 crc kubenswrapper[4794]: E0216 18:06:37.793590 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:06:37 crc kubenswrapper[4794]: I0216 18:06:37.876558 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwgqx\" (UniqueName: \"kubernetes.io/projected/068b8b8d-0e55-4a56-ad92-ae6d0353ed0d-kube-api-access-zwgqx\") pod \"redhat-marketplace-fcwcz\" (UID: \"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d\") " pod="openshift-marketplace/redhat-marketplace-fcwcz" Feb 16 18:06:37 crc kubenswrapper[4794]: I0216 18:06:37.876805 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/068b8b8d-0e55-4a56-ad92-ae6d0353ed0d-utilities\") pod \"redhat-marketplace-fcwcz\" (UID: \"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d\") " pod="openshift-marketplace/redhat-marketplace-fcwcz" Feb 16 18:06:37 crc kubenswrapper[4794]: I0216 18:06:37.877012 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/068b8b8d-0e55-4a56-ad92-ae6d0353ed0d-catalog-content\") pod \"redhat-marketplace-fcwcz\" (UID: \"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d\") " pod="openshift-marketplace/redhat-marketplace-fcwcz" Feb 16 18:06:37 crc kubenswrapper[4794]: I0216 18:06:37.980060 4794 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/068b8b8d-0e55-4a56-ad92-ae6d0353ed0d-utilities\") pod \"redhat-marketplace-fcwcz\" (UID: \"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d\") " pod="openshift-marketplace/redhat-marketplace-fcwcz" Feb 16 18:06:37 crc kubenswrapper[4794]: I0216 18:06:37.980400 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/068b8b8d-0e55-4a56-ad92-ae6d0353ed0d-catalog-content\") pod \"redhat-marketplace-fcwcz\" (UID: \"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d\") " pod="openshift-marketplace/redhat-marketplace-fcwcz" Feb 16 18:06:37 crc kubenswrapper[4794]: I0216 18:06:37.980458 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwgqx\" (UniqueName: \"kubernetes.io/projected/068b8b8d-0e55-4a56-ad92-ae6d0353ed0d-kube-api-access-zwgqx\") pod \"redhat-marketplace-fcwcz\" (UID: \"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d\") " pod="openshift-marketplace/redhat-marketplace-fcwcz" Feb 16 18:06:37 crc kubenswrapper[4794]: I0216 18:06:37.980648 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/068b8b8d-0e55-4a56-ad92-ae6d0353ed0d-utilities\") pod \"redhat-marketplace-fcwcz\" (UID: \"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d\") " pod="openshift-marketplace/redhat-marketplace-fcwcz" Feb 16 18:06:37 crc kubenswrapper[4794]: I0216 18:06:37.980763 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/068b8b8d-0e55-4a56-ad92-ae6d0353ed0d-catalog-content\") pod \"redhat-marketplace-fcwcz\" (UID: \"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d\") " pod="openshift-marketplace/redhat-marketplace-fcwcz" Feb 16 18:06:38 crc kubenswrapper[4794]: I0216 18:06:38.574845 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zwgqx\" (UniqueName: \"kubernetes.io/projected/068b8b8d-0e55-4a56-ad92-ae6d0353ed0d-kube-api-access-zwgqx\") pod \"redhat-marketplace-fcwcz\" (UID: \"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d\") " pod="openshift-marketplace/redhat-marketplace-fcwcz" Feb 16 18:06:38 crc kubenswrapper[4794]: I0216 18:06:38.646483 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcwcz" Feb 16 18:06:38 crc kubenswrapper[4794]: E0216 18:06:38.793128 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:06:39 crc kubenswrapper[4794]: I0216 18:06:39.251286 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcwcz"] Feb 16 18:06:39 crc kubenswrapper[4794]: W0216 18:06:39.251666 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod068b8b8d_0e55_4a56_ad92_ae6d0353ed0d.slice/crio-02fd6e8262ec7b2a22b773f223e12ef5ce238f3d0ba060c11786fbe3f7c3d0de WatchSource:0}: Error finding container 02fd6e8262ec7b2a22b773f223e12ef5ce238f3d0ba060c11786fbe3f7c3d0de: Status 404 returned error can't find the container with id 02fd6e8262ec7b2a22b773f223e12ef5ce238f3d0ba060c11786fbe3f7c3d0de Feb 16 18:06:40 crc kubenswrapper[4794]: I0216 18:06:40.121440 4794 generic.go:334] "Generic (PLEG): container finished" podID="068b8b8d-0e55-4a56-ad92-ae6d0353ed0d" containerID="67cd9623d3ac73f4e5eed61aea364c9591c113da26b837718b3d1ace6c5e5476" exitCode=0 Feb 16 18:06:40 crc kubenswrapper[4794]: I0216 18:06:40.121493 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-fcwcz" event={"ID":"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d","Type":"ContainerDied","Data":"67cd9623d3ac73f4e5eed61aea364c9591c113da26b837718b3d1ace6c5e5476"} Feb 16 18:06:40 crc kubenswrapper[4794]: I0216 18:06:40.121524 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcwcz" event={"ID":"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d","Type":"ContainerStarted","Data":"02fd6e8262ec7b2a22b773f223e12ef5ce238f3d0ba060c11786fbe3f7c3d0de"} Feb 16 18:06:41 crc kubenswrapper[4794]: I0216 18:06:41.131531 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcwcz" event={"ID":"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d","Type":"ContainerStarted","Data":"cfe5402111174c2dc48e465c7e8512fbfc2ef1d8a620ed6bdee101f9b12e3006"} Feb 16 18:06:42 crc kubenswrapper[4794]: I0216 18:06:42.142972 4794 generic.go:334] "Generic (PLEG): container finished" podID="068b8b8d-0e55-4a56-ad92-ae6d0353ed0d" containerID="cfe5402111174c2dc48e465c7e8512fbfc2ef1d8a620ed6bdee101f9b12e3006" exitCode=0 Feb 16 18:06:42 crc kubenswrapper[4794]: I0216 18:06:42.143054 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcwcz" event={"ID":"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d","Type":"ContainerDied","Data":"cfe5402111174c2dc48e465c7e8512fbfc2ef1d8a620ed6bdee101f9b12e3006"} Feb 16 18:06:43 crc kubenswrapper[4794]: I0216 18:06:43.157100 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcwcz" event={"ID":"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d","Type":"ContainerStarted","Data":"55bcda4adde5bcb15e3d63aaa731a4e9b5d3a38ad1d5ab2200e747ca8a3bed62"} Feb 16 18:06:43 crc kubenswrapper[4794]: I0216 18:06:43.183278 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fcwcz" podStartSLOduration=3.7731593979999998 
podStartE2EDuration="6.183253203s" podCreationTimestamp="2026-02-16 18:06:37 +0000 UTC" firstStartedPulling="2026-02-16 18:06:40.125065936 +0000 UTC m=+4026.073160583" lastFinishedPulling="2026-02-16 18:06:42.535159701 +0000 UTC m=+4028.483254388" observedRunningTime="2026-02-16 18:06:43.175320319 +0000 UTC m=+4029.123414966" watchObservedRunningTime="2026-02-16 18:06:43.183253203 +0000 UTC m=+4029.131347890" Feb 16 18:06:48 crc kubenswrapper[4794]: I0216 18:06:48.647358 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fcwcz" Feb 16 18:06:48 crc kubenswrapper[4794]: I0216 18:06:48.649470 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fcwcz" Feb 16 18:06:48 crc kubenswrapper[4794]: I0216 18:06:48.740492 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fcwcz" Feb 16 18:06:49 crc kubenswrapper[4794]: I0216 18:06:49.314326 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fcwcz" Feb 16 18:06:49 crc kubenswrapper[4794]: I0216 18:06:49.377040 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcwcz"] Feb 16 18:06:50 crc kubenswrapper[4794]: I0216 18:06:50.141015 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 18:06:50 crc kubenswrapper[4794]: I0216 18:06:50.141071 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 18:06:50 crc kubenswrapper[4794]: E0216 18:06:50.793708 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:06:50 crc kubenswrapper[4794]: E0216 18:06:50.794038 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:06:51 crc kubenswrapper[4794]: I0216 18:06:51.272221 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fcwcz" podUID="068b8b8d-0e55-4a56-ad92-ae6d0353ed0d" containerName="registry-server" containerID="cri-o://55bcda4adde5bcb15e3d63aaa731a4e9b5d3a38ad1d5ab2200e747ca8a3bed62" gracePeriod=2 Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.074215 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcwcz" Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.152205 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/068b8b8d-0e55-4a56-ad92-ae6d0353ed0d-utilities\") pod \"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d\" (UID: \"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d\") " Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.152248 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/068b8b8d-0e55-4a56-ad92-ae6d0353ed0d-catalog-content\") pod \"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d\" (UID: \"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d\") " Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.152408 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwgqx\" (UniqueName: \"kubernetes.io/projected/068b8b8d-0e55-4a56-ad92-ae6d0353ed0d-kube-api-access-zwgqx\") pod \"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d\" (UID: \"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d\") " Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.153045 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/068b8b8d-0e55-4a56-ad92-ae6d0353ed0d-utilities" (OuterVolumeSpecName: "utilities") pod "068b8b8d-0e55-4a56-ad92-ae6d0353ed0d" (UID: "068b8b8d-0e55-4a56-ad92-ae6d0353ed0d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.153829 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/068b8b8d-0e55-4a56-ad92-ae6d0353ed0d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.159369 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/068b8b8d-0e55-4a56-ad92-ae6d0353ed0d-kube-api-access-zwgqx" (OuterVolumeSpecName: "kube-api-access-zwgqx") pod "068b8b8d-0e55-4a56-ad92-ae6d0353ed0d" (UID: "068b8b8d-0e55-4a56-ad92-ae6d0353ed0d"). InnerVolumeSpecName "kube-api-access-zwgqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.183390 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/068b8b8d-0e55-4a56-ad92-ae6d0353ed0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "068b8b8d-0e55-4a56-ad92-ae6d0353ed0d" (UID: "068b8b8d-0e55-4a56-ad92-ae6d0353ed0d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.256211 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwgqx\" (UniqueName: \"kubernetes.io/projected/068b8b8d-0e55-4a56-ad92-ae6d0353ed0d-kube-api-access-zwgqx\") on node \"crc\" DevicePath \"\"" Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.256248 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/068b8b8d-0e55-4a56-ad92-ae6d0353ed0d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.287333 4794 generic.go:334] "Generic (PLEG): container finished" podID="068b8b8d-0e55-4a56-ad92-ae6d0353ed0d" containerID="55bcda4adde5bcb15e3d63aaa731a4e9b5d3a38ad1d5ab2200e747ca8a3bed62" exitCode=0 Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.287377 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcwcz" Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.287397 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcwcz" event={"ID":"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d","Type":"ContainerDied","Data":"55bcda4adde5bcb15e3d63aaa731a4e9b5d3a38ad1d5ab2200e747ca8a3bed62"} Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.287465 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcwcz" event={"ID":"068b8b8d-0e55-4a56-ad92-ae6d0353ed0d","Type":"ContainerDied","Data":"02fd6e8262ec7b2a22b773f223e12ef5ce238f3d0ba060c11786fbe3f7c3d0de"} Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.287489 4794 scope.go:117] "RemoveContainer" containerID="55bcda4adde5bcb15e3d63aaa731a4e9b5d3a38ad1d5ab2200e747ca8a3bed62" Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.339975 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-fcwcz"] Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.341155 4794 scope.go:117] "RemoveContainer" containerID="cfe5402111174c2dc48e465c7e8512fbfc2ef1d8a620ed6bdee101f9b12e3006" Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.351513 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcwcz"] Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.385159 4794 scope.go:117] "RemoveContainer" containerID="67cd9623d3ac73f4e5eed61aea364c9591c113da26b837718b3d1ace6c5e5476" Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.418425 4794 scope.go:117] "RemoveContainer" containerID="55bcda4adde5bcb15e3d63aaa731a4e9b5d3a38ad1d5ab2200e747ca8a3bed62" Feb 16 18:06:52 crc kubenswrapper[4794]: E0216 18:06:52.418967 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55bcda4adde5bcb15e3d63aaa731a4e9b5d3a38ad1d5ab2200e747ca8a3bed62\": container with ID starting with 55bcda4adde5bcb15e3d63aaa731a4e9b5d3a38ad1d5ab2200e747ca8a3bed62 not found: ID does not exist" containerID="55bcda4adde5bcb15e3d63aaa731a4e9b5d3a38ad1d5ab2200e747ca8a3bed62" Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.419011 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55bcda4adde5bcb15e3d63aaa731a4e9b5d3a38ad1d5ab2200e747ca8a3bed62"} err="failed to get container status \"55bcda4adde5bcb15e3d63aaa731a4e9b5d3a38ad1d5ab2200e747ca8a3bed62\": rpc error: code = NotFound desc = could not find container \"55bcda4adde5bcb15e3d63aaa731a4e9b5d3a38ad1d5ab2200e747ca8a3bed62\": container with ID starting with 55bcda4adde5bcb15e3d63aaa731a4e9b5d3a38ad1d5ab2200e747ca8a3bed62 not found: ID does not exist" Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.419040 4794 scope.go:117] "RemoveContainer" 
containerID="cfe5402111174c2dc48e465c7e8512fbfc2ef1d8a620ed6bdee101f9b12e3006" Feb 16 18:06:52 crc kubenswrapper[4794]: E0216 18:06:52.419334 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfe5402111174c2dc48e465c7e8512fbfc2ef1d8a620ed6bdee101f9b12e3006\": container with ID starting with cfe5402111174c2dc48e465c7e8512fbfc2ef1d8a620ed6bdee101f9b12e3006 not found: ID does not exist" containerID="cfe5402111174c2dc48e465c7e8512fbfc2ef1d8a620ed6bdee101f9b12e3006" Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.419370 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfe5402111174c2dc48e465c7e8512fbfc2ef1d8a620ed6bdee101f9b12e3006"} err="failed to get container status \"cfe5402111174c2dc48e465c7e8512fbfc2ef1d8a620ed6bdee101f9b12e3006\": rpc error: code = NotFound desc = could not find container \"cfe5402111174c2dc48e465c7e8512fbfc2ef1d8a620ed6bdee101f9b12e3006\": container with ID starting with cfe5402111174c2dc48e465c7e8512fbfc2ef1d8a620ed6bdee101f9b12e3006 not found: ID does not exist" Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.419391 4794 scope.go:117] "RemoveContainer" containerID="67cd9623d3ac73f4e5eed61aea364c9591c113da26b837718b3d1ace6c5e5476" Feb 16 18:06:52 crc kubenswrapper[4794]: E0216 18:06:52.419756 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67cd9623d3ac73f4e5eed61aea364c9591c113da26b837718b3d1ace6c5e5476\": container with ID starting with 67cd9623d3ac73f4e5eed61aea364c9591c113da26b837718b3d1ace6c5e5476 not found: ID does not exist" containerID="67cd9623d3ac73f4e5eed61aea364c9591c113da26b837718b3d1ace6c5e5476" Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.419785 4794 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"67cd9623d3ac73f4e5eed61aea364c9591c113da26b837718b3d1ace6c5e5476"} err="failed to get container status \"67cd9623d3ac73f4e5eed61aea364c9591c113da26b837718b3d1ace6c5e5476\": rpc error: code = NotFound desc = could not find container \"67cd9623d3ac73f4e5eed61aea364c9591c113da26b837718b3d1ace6c5e5476\": container with ID starting with 67cd9623d3ac73f4e5eed61aea364c9591c113da26b837718b3d1ace6c5e5476 not found: ID does not exist" Feb 16 18:06:52 crc kubenswrapper[4794]: I0216 18:06:52.829560 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="068b8b8d-0e55-4a56-ad92-ae6d0353ed0d" path="/var/lib/kubelet/pods/068b8b8d-0e55-4a56-ad92-ae6d0353ed0d/volumes" Feb 16 18:07:02 crc kubenswrapper[4794]: E0216 18:07:02.793942 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:07:05 crc kubenswrapper[4794]: E0216 18:07:05.794049 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:07:16 crc kubenswrapper[4794]: E0216 18:07:16.796234 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:07:16 crc kubenswrapper[4794]: E0216 18:07:16.796403 4794 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:07:20 crc kubenswrapper[4794]: I0216 18:07:20.140292 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 18:07:20 crc kubenswrapper[4794]: I0216 18:07:20.140828 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 18:07:20 crc kubenswrapper[4794]: I0216 18:07:20.140872 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 18:07:20 crc kubenswrapper[4794]: I0216 18:07:20.141905 4794 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ff6c0c9a8ff214790fe27eab00a8571918a74a6f3bddb6e4205a0b13c5dcd7b5"} pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 18:07:20 crc kubenswrapper[4794]: I0216 18:07:20.141952 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" 
containerName="machine-config-daemon" containerID="cri-o://ff6c0c9a8ff214790fe27eab00a8571918a74a6f3bddb6e4205a0b13c5dcd7b5" gracePeriod=600 Feb 16 18:07:20 crc kubenswrapper[4794]: I0216 18:07:20.621568 4794 generic.go:334] "Generic (PLEG): container finished" podID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerID="ff6c0c9a8ff214790fe27eab00a8571918a74a6f3bddb6e4205a0b13c5dcd7b5" exitCode=0 Feb 16 18:07:20 crc kubenswrapper[4794]: I0216 18:07:20.621634 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerDied","Data":"ff6c0c9a8ff214790fe27eab00a8571918a74a6f3bddb6e4205a0b13c5dcd7b5"} Feb 16 18:07:20 crc kubenswrapper[4794]: I0216 18:07:20.621682 4794 scope.go:117] "RemoveContainer" containerID="b9dbd3fa18ebcb3d8888e3b92e151f642273c001c7a0d501f50a4853b9834121" Feb 16 18:07:21 crc kubenswrapper[4794]: I0216 18:07:21.637089 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d"} Feb 16 18:07:27 crc kubenswrapper[4794]: I0216 18:07:27.794468 4794 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 18:07:27 crc kubenswrapper[4794]: E0216 18:07:27.923226 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 18:07:27 crc kubenswrapper[4794]: E0216 18:07:27.923288 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 18:07:27 crc kubenswrapper[4794]: E0216 18:07:27.923421 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2h5l
2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7gcsf_openstack(c695f880-15cb-45b1-8545-60d8437ec631): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 18:07:27 crc kubenswrapper[4794]: E0216 18:07:27.924592 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:07:29 crc kubenswrapper[4794]: E0216 18:07:29.793374 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:07:39 crc kubenswrapper[4794]: E0216 18:07:39.795547 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:07:42 crc kubenswrapper[4794]: E0216 18:07:42.929995 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 18:07:42 crc kubenswrapper[4794]: E0216 18:07:42.930877 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 18:07:42 crc kubenswrapper[4794]: E0216 18:07:42.931106 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh58dh6ch557h84h55ch564h5bh58fh5c8h5d4h584h669h667h569h59hd5hdbh9dh67ch5f9h59fh597h96h664h687h66dhfch5ddh5b7h88h59cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9v9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8981f528-1f74-4d56-a93c-22860725b490): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 18:07:42 crc kubenswrapper[4794]: E0216 18:07:42.932613 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:07:54 crc kubenswrapper[4794]: I0216 18:07:54.042202 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg"] Feb 16 18:07:54 crc kubenswrapper[4794]: E0216 18:07:54.043798 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="068b8b8d-0e55-4a56-ad92-ae6d0353ed0d" containerName="extract-utilities" Feb 16 18:07:54 crc kubenswrapper[4794]: I0216 18:07:54.043822 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="068b8b8d-0e55-4a56-ad92-ae6d0353ed0d" containerName="extract-utilities" Feb 16 18:07:54 crc kubenswrapper[4794]: E0216 18:07:54.043883 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="068b8b8d-0e55-4a56-ad92-ae6d0353ed0d" containerName="extract-content" Feb 16 18:07:54 crc kubenswrapper[4794]: I0216 18:07:54.043895 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="068b8b8d-0e55-4a56-ad92-ae6d0353ed0d" containerName="extract-content" Feb 16 18:07:54 crc kubenswrapper[4794]: E0216 18:07:54.043924 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="068b8b8d-0e55-4a56-ad92-ae6d0353ed0d" containerName="registry-server" Feb 16 18:07:54 crc kubenswrapper[4794]: I0216 18:07:54.043941 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="068b8b8d-0e55-4a56-ad92-ae6d0353ed0d" containerName="registry-server" Feb 16 18:07:54 crc kubenswrapper[4794]: I0216 18:07:54.044454 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="068b8b8d-0e55-4a56-ad92-ae6d0353ed0d" containerName="registry-server" Feb 16 18:07:54 crc kubenswrapper[4794]: I0216 18:07:54.045856 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg" Feb 16 18:07:54 crc kubenswrapper[4794]: I0216 18:07:54.048989 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kshzw" Feb 16 18:07:54 crc kubenswrapper[4794]: I0216 18:07:54.049029 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 18:07:54 crc kubenswrapper[4794]: I0216 18:07:54.049259 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 18:07:54 crc kubenswrapper[4794]: I0216 18:07:54.049574 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 18:07:54 crc kubenswrapper[4794]: I0216 18:07:54.060107 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg"] Feb 16 18:07:54 crc kubenswrapper[4794]: I0216 18:07:54.198125 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg\" (UID: \"d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg" Feb 16 18:07:54 crc kubenswrapper[4794]: I0216 18:07:54.198208 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9rc6\" (UniqueName: \"kubernetes.io/projected/d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12-kube-api-access-w9rc6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg\" (UID: \"d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg" Feb 16 18:07:54 crc kubenswrapper[4794]: I0216 
18:07:54.198417 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg\" (UID: \"d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg" Feb 16 18:07:54 crc kubenswrapper[4794]: I0216 18:07:54.300979 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9rc6\" (UniqueName: \"kubernetes.io/projected/d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12-kube-api-access-w9rc6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg\" (UID: \"d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg" Feb 16 18:07:54 crc kubenswrapper[4794]: I0216 18:07:54.301115 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg\" (UID: \"d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg" Feb 16 18:07:54 crc kubenswrapper[4794]: I0216 18:07:54.301425 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg\" (UID: \"d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg" Feb 16 18:07:54 crc kubenswrapper[4794]: I0216 18:07:54.308015 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg\" (UID: \"d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg" Feb 16 18:07:54 crc kubenswrapper[4794]: I0216 18:07:54.308434 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg\" (UID: \"d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg" Feb 16 18:07:54 crc kubenswrapper[4794]: I0216 18:07:54.323965 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9rc6\" (UniqueName: \"kubernetes.io/projected/d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12-kube-api-access-w9rc6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg\" (UID: \"d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg" Feb 16 18:07:54 crc kubenswrapper[4794]: I0216 18:07:54.400764 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg" Feb 16 18:07:54 crc kubenswrapper[4794]: E0216 18:07:54.811107 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:07:55 crc kubenswrapper[4794]: I0216 18:07:55.004049 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg"] Feb 16 18:07:56 crc kubenswrapper[4794]: I0216 18:07:56.055035 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg" event={"ID":"d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12","Type":"ContainerStarted","Data":"dc8d8cbb5db161d87302a4a4b2116dd5b65df49edd32d1c61030c569a3fe93cb"} Feb 16 18:07:56 crc kubenswrapper[4794]: E0216 18:07:56.794856 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:07:57 crc kubenswrapper[4794]: I0216 18:07:57.068665 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg" event={"ID":"d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12","Type":"ContainerStarted","Data":"f7568096cd1993ac5b758c72ab35c7e051b79f7b5ea5caad4140cf5262d458a4"} Feb 16 18:07:57 crc kubenswrapper[4794]: I0216 18:07:57.097281 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg" 
podStartSLOduration=2.724501169 podStartE2EDuration="3.097257477s" podCreationTimestamp="2026-02-16 18:07:54 +0000 UTC" firstStartedPulling="2026-02-16 18:07:55.594436101 +0000 UTC m=+4101.542530738" lastFinishedPulling="2026-02-16 18:07:55.967192379 +0000 UTC m=+4101.915287046" observedRunningTime="2026-02-16 18:07:57.086111601 +0000 UTC m=+4103.034206258" watchObservedRunningTime="2026-02-16 18:07:57.097257477 +0000 UTC m=+4103.045352134" Feb 16 18:08:07 crc kubenswrapper[4794]: E0216 18:08:07.795378 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:08:07 crc kubenswrapper[4794]: E0216 18:08:07.796244 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:08:19 crc kubenswrapper[4794]: E0216 18:08:19.796601 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:08:22 crc kubenswrapper[4794]: E0216 18:08:22.794044 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:08:34 crc kubenswrapper[4794]: E0216 18:08:34.811173 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:08:34 crc kubenswrapper[4794]: E0216 18:08:34.811207 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:08:45 crc kubenswrapper[4794]: E0216 18:08:45.794618 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:08:48 crc kubenswrapper[4794]: E0216 18:08:48.795033 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:09:00 crc kubenswrapper[4794]: E0216 18:09:00.793671 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:09:03 crc kubenswrapper[4794]: E0216 18:09:03.794732 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:09:15 crc kubenswrapper[4794]: E0216 18:09:15.794382 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:09:15 crc kubenswrapper[4794]: E0216 18:09:15.794749 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:09:27 crc kubenswrapper[4794]: E0216 18:09:27.797357 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:09:28 crc kubenswrapper[4794]: E0216 18:09:28.798378 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:09:40 crc kubenswrapper[4794]: E0216 18:09:40.793546 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:09:43 crc kubenswrapper[4794]: E0216 18:09:43.793531 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:09:50 crc kubenswrapper[4794]: I0216 18:09:50.140292 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 18:09:50 crc kubenswrapper[4794]: I0216 18:09:50.140935 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 18:09:55 crc kubenswrapper[4794]: E0216 18:09:55.796665 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:09:58 crc kubenswrapper[4794]: E0216 18:09:58.795085 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:10:10 crc kubenswrapper[4794]: E0216 18:10:10.793409 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:10:10 crc kubenswrapper[4794]: E0216 18:10:10.793687 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:10:20 crc kubenswrapper[4794]: I0216 18:10:20.141398 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 18:10:20 crc kubenswrapper[4794]: I0216 18:10:20.141825 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 18:10:22 crc kubenswrapper[4794]: E0216 18:10:22.795489 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:10:23 crc kubenswrapper[4794]: E0216 18:10:23.792723 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:10:34 crc kubenswrapper[4794]: E0216 18:10:34.805364 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:10:34 crc kubenswrapper[4794]: E0216 18:10:34.806649 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:10:45 crc kubenswrapper[4794]: E0216 18:10:45.795616 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:10:49 crc kubenswrapper[4794]: E0216 18:10:49.795452 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:10:50 crc kubenswrapper[4794]: I0216 18:10:50.140322 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 18:10:50 crc kubenswrapper[4794]: I0216 18:10:50.140393 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 18:10:50 crc kubenswrapper[4794]: I0216 18:10:50.140436 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 18:10:50 crc kubenswrapper[4794]: I0216 18:10:50.141163 4794 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d"} pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 18:10:50 crc 
kubenswrapper[4794]: I0216 18:10:50.141250 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" containerID="cri-o://cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d" gracePeriod=600 Feb 16 18:10:50 crc kubenswrapper[4794]: E0216 18:10:50.258638 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:10:51 crc kubenswrapper[4794]: I0216 18:10:51.165451 4794 generic.go:334] "Generic (PLEG): container finished" podID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d" exitCode=0 Feb 16 18:10:51 crc kubenswrapper[4794]: I0216 18:10:51.165668 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerDied","Data":"cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d"} Feb 16 18:10:51 crc kubenswrapper[4794]: I0216 18:10:51.165863 4794 scope.go:117] "RemoveContainer" containerID="ff6c0c9a8ff214790fe27eab00a8571918a74a6f3bddb6e4205a0b13c5dcd7b5" Feb 16 18:10:51 crc kubenswrapper[4794]: I0216 18:10:51.166867 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d" Feb 16 18:10:51 crc kubenswrapper[4794]: E0216 18:10:51.167510 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:10:57 crc kubenswrapper[4794]: I0216 18:10:57.210863 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-59sxb"] Feb 16 18:10:57 crc kubenswrapper[4794]: I0216 18:10:57.213999 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-59sxb" Feb 16 18:10:57 crc kubenswrapper[4794]: I0216 18:10:57.235107 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-59sxb"] Feb 16 18:10:57 crc kubenswrapper[4794]: I0216 18:10:57.335150 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8df7f41-86e8-40bf-b0d2-987dee5ff000-catalog-content\") pod \"community-operators-59sxb\" (UID: \"b8df7f41-86e8-40bf-b0d2-987dee5ff000\") " pod="openshift-marketplace/community-operators-59sxb" Feb 16 18:10:57 crc kubenswrapper[4794]: I0216 18:10:57.335755 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j9kz\" (UniqueName: \"kubernetes.io/projected/b8df7f41-86e8-40bf-b0d2-987dee5ff000-kube-api-access-5j9kz\") pod \"community-operators-59sxb\" (UID: \"b8df7f41-86e8-40bf-b0d2-987dee5ff000\") " pod="openshift-marketplace/community-operators-59sxb" Feb 16 18:10:57 crc kubenswrapper[4794]: I0216 18:10:57.336062 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8df7f41-86e8-40bf-b0d2-987dee5ff000-utilities\") pod \"community-operators-59sxb\" (UID: 
\"b8df7f41-86e8-40bf-b0d2-987dee5ff000\") " pod="openshift-marketplace/community-operators-59sxb" Feb 16 18:10:57 crc kubenswrapper[4794]: I0216 18:10:57.438755 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8df7f41-86e8-40bf-b0d2-987dee5ff000-utilities\") pod \"community-operators-59sxb\" (UID: \"b8df7f41-86e8-40bf-b0d2-987dee5ff000\") " pod="openshift-marketplace/community-operators-59sxb" Feb 16 18:10:57 crc kubenswrapper[4794]: I0216 18:10:57.438904 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8df7f41-86e8-40bf-b0d2-987dee5ff000-catalog-content\") pod \"community-operators-59sxb\" (UID: \"b8df7f41-86e8-40bf-b0d2-987dee5ff000\") " pod="openshift-marketplace/community-operators-59sxb" Feb 16 18:10:57 crc kubenswrapper[4794]: I0216 18:10:57.439049 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j9kz\" (UniqueName: \"kubernetes.io/projected/b8df7f41-86e8-40bf-b0d2-987dee5ff000-kube-api-access-5j9kz\") pod \"community-operators-59sxb\" (UID: \"b8df7f41-86e8-40bf-b0d2-987dee5ff000\") " pod="openshift-marketplace/community-operators-59sxb" Feb 16 18:10:57 crc kubenswrapper[4794]: I0216 18:10:57.439488 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8df7f41-86e8-40bf-b0d2-987dee5ff000-utilities\") pod \"community-operators-59sxb\" (UID: \"b8df7f41-86e8-40bf-b0d2-987dee5ff000\") " pod="openshift-marketplace/community-operators-59sxb" Feb 16 18:10:57 crc kubenswrapper[4794]: I0216 18:10:57.439538 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8df7f41-86e8-40bf-b0d2-987dee5ff000-catalog-content\") pod \"community-operators-59sxb\" (UID: \"b8df7f41-86e8-40bf-b0d2-987dee5ff000\") 
" pod="openshift-marketplace/community-operators-59sxb" Feb 16 18:10:57 crc kubenswrapper[4794]: I0216 18:10:57.464276 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j9kz\" (UniqueName: \"kubernetes.io/projected/b8df7f41-86e8-40bf-b0d2-987dee5ff000-kube-api-access-5j9kz\") pod \"community-operators-59sxb\" (UID: \"b8df7f41-86e8-40bf-b0d2-987dee5ff000\") " pod="openshift-marketplace/community-operators-59sxb" Feb 16 18:10:57 crc kubenswrapper[4794]: I0216 18:10:57.610773 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-59sxb" Feb 16 18:10:58 crc kubenswrapper[4794]: I0216 18:10:58.135176 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-59sxb"] Feb 16 18:10:58 crc kubenswrapper[4794]: I0216 18:10:58.262899 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59sxb" event={"ID":"b8df7f41-86e8-40bf-b0d2-987dee5ff000","Type":"ContainerStarted","Data":"e9ed43b50abe55cad43aa7d9f2c5b1e54dbd10275acb9ab178931148da74a663"} Feb 16 18:10:58 crc kubenswrapper[4794]: E0216 18:10:58.794227 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:10:59 crc kubenswrapper[4794]: I0216 18:10:59.274709 4794 generic.go:334] "Generic (PLEG): container finished" podID="b8df7f41-86e8-40bf-b0d2-987dee5ff000" containerID="8d8ff0a206cfb0512d6d751479b58926933180571c03b9b8685ade3a854b7bbd" exitCode=0 Feb 16 18:10:59 crc kubenswrapper[4794]: I0216 18:10:59.274762 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59sxb" 
event={"ID":"b8df7f41-86e8-40bf-b0d2-987dee5ff000","Type":"ContainerDied","Data":"8d8ff0a206cfb0512d6d751479b58926933180571c03b9b8685ade3a854b7bbd"} Feb 16 18:11:00 crc kubenswrapper[4794]: I0216 18:11:00.289618 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59sxb" event={"ID":"b8df7f41-86e8-40bf-b0d2-987dee5ff000","Type":"ContainerStarted","Data":"82776d18922bec07ddaef3bbd83fae9e5910946dd5a20eaa667e4c7f083a4542"} Feb 16 18:11:01 crc kubenswrapper[4794]: I0216 18:11:01.301073 4794 generic.go:334] "Generic (PLEG): container finished" podID="b8df7f41-86e8-40bf-b0d2-987dee5ff000" containerID="82776d18922bec07ddaef3bbd83fae9e5910946dd5a20eaa667e4c7f083a4542" exitCode=0 Feb 16 18:11:01 crc kubenswrapper[4794]: I0216 18:11:01.301123 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59sxb" event={"ID":"b8df7f41-86e8-40bf-b0d2-987dee5ff000","Type":"ContainerDied","Data":"82776d18922bec07ddaef3bbd83fae9e5910946dd5a20eaa667e4c7f083a4542"} Feb 16 18:11:01 crc kubenswrapper[4794]: E0216 18:11:01.798049 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:11:02 crc kubenswrapper[4794]: I0216 18:11:02.314646 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59sxb" event={"ID":"b8df7f41-86e8-40bf-b0d2-987dee5ff000","Type":"ContainerStarted","Data":"2a8eebf719d172ba0c2229b4f65cad16d95a962473737c9ee0aaa715dfc381e0"} Feb 16 18:11:02 crc kubenswrapper[4794]: I0216 18:11:02.352416 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-59sxb" 
podStartSLOduration=2.885379198 podStartE2EDuration="5.35238709s" podCreationTimestamp="2026-02-16 18:10:57 +0000 UTC" firstStartedPulling="2026-02-16 18:10:59.278824132 +0000 UTC m=+4285.226918799" lastFinishedPulling="2026-02-16 18:11:01.745832014 +0000 UTC m=+4287.693926691" observedRunningTime="2026-02-16 18:11:02.343122847 +0000 UTC m=+4288.291217514" watchObservedRunningTime="2026-02-16 18:11:02.35238709 +0000 UTC m=+4288.300481767" Feb 16 18:11:04 crc kubenswrapper[4794]: I0216 18:11:04.816913 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d" Feb 16 18:11:04 crc kubenswrapper[4794]: E0216 18:11:04.818816 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:11:07 crc kubenswrapper[4794]: I0216 18:11:07.611392 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-59sxb" Feb 16 18:11:07 crc kubenswrapper[4794]: I0216 18:11:07.613148 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-59sxb" Feb 16 18:11:07 crc kubenswrapper[4794]: I0216 18:11:07.691124 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-59sxb" Feb 16 18:11:08 crc kubenswrapper[4794]: I0216 18:11:08.859590 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-59sxb" Feb 16 18:11:08 crc kubenswrapper[4794]: I0216 18:11:08.945808 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-59sxb"] Feb 16 18:11:10 crc kubenswrapper[4794]: I0216 18:11:10.436641 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-59sxb" podUID="b8df7f41-86e8-40bf-b0d2-987dee5ff000" containerName="registry-server" containerID="cri-o://2a8eebf719d172ba0c2229b4f65cad16d95a962473737c9ee0aaa715dfc381e0" gracePeriod=2 Feb 16 18:11:10 crc kubenswrapper[4794]: I0216 18:11:10.991646 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-59sxb" Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.124003 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8df7f41-86e8-40bf-b0d2-987dee5ff000-utilities\") pod \"b8df7f41-86e8-40bf-b0d2-987dee5ff000\" (UID: \"b8df7f41-86e8-40bf-b0d2-987dee5ff000\") " Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.124454 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8df7f41-86e8-40bf-b0d2-987dee5ff000-catalog-content\") pod \"b8df7f41-86e8-40bf-b0d2-987dee5ff000\" (UID: \"b8df7f41-86e8-40bf-b0d2-987dee5ff000\") " Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.124525 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j9kz\" (UniqueName: \"kubernetes.io/projected/b8df7f41-86e8-40bf-b0d2-987dee5ff000-kube-api-access-5j9kz\") pod \"b8df7f41-86e8-40bf-b0d2-987dee5ff000\" (UID: \"b8df7f41-86e8-40bf-b0d2-987dee5ff000\") " Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.125182 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8df7f41-86e8-40bf-b0d2-987dee5ff000-utilities" (OuterVolumeSpecName: "utilities") pod "b8df7f41-86e8-40bf-b0d2-987dee5ff000" (UID: 
"b8df7f41-86e8-40bf-b0d2-987dee5ff000"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.132296 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8df7f41-86e8-40bf-b0d2-987dee5ff000-kube-api-access-5j9kz" (OuterVolumeSpecName: "kube-api-access-5j9kz") pod "b8df7f41-86e8-40bf-b0d2-987dee5ff000" (UID: "b8df7f41-86e8-40bf-b0d2-987dee5ff000"). InnerVolumeSpecName "kube-api-access-5j9kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.177981 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8df7f41-86e8-40bf-b0d2-987dee5ff000-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8df7f41-86e8-40bf-b0d2-987dee5ff000" (UID: "b8df7f41-86e8-40bf-b0d2-987dee5ff000"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.227450 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8df7f41-86e8-40bf-b0d2-987dee5ff000-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.227888 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8df7f41-86e8-40bf-b0d2-987dee5ff000-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.227907 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j9kz\" (UniqueName: \"kubernetes.io/projected/b8df7f41-86e8-40bf-b0d2-987dee5ff000-kube-api-access-5j9kz\") on node \"crc\" DevicePath \"\"" Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.451832 4794 generic.go:334] "Generic (PLEG): container finished" 
podID="b8df7f41-86e8-40bf-b0d2-987dee5ff000" containerID="2a8eebf719d172ba0c2229b4f65cad16d95a962473737c9ee0aaa715dfc381e0" exitCode=0
Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.451888 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59sxb" event={"ID":"b8df7f41-86e8-40bf-b0d2-987dee5ff000","Type":"ContainerDied","Data":"2a8eebf719d172ba0c2229b4f65cad16d95a962473737c9ee0aaa715dfc381e0"}
Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.451931 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59sxb" event={"ID":"b8df7f41-86e8-40bf-b0d2-987dee5ff000","Type":"ContainerDied","Data":"e9ed43b50abe55cad43aa7d9f2c5b1e54dbd10275acb9ab178931148da74a663"}
Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.451975 4794 scope.go:117] "RemoveContainer" containerID="2a8eebf719d172ba0c2229b4f65cad16d95a962473737c9ee0aaa715dfc381e0"
Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.451977 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-59sxb"
Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.477565 4794 scope.go:117] "RemoveContainer" containerID="82776d18922bec07ddaef3bbd83fae9e5910946dd5a20eaa667e4c7f083a4542"
Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.512917 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-59sxb"]
Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.517947 4794 scope.go:117] "RemoveContainer" containerID="8d8ff0a206cfb0512d6d751479b58926933180571c03b9b8685ade3a854b7bbd"
Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.526450 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-59sxb"]
Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.566602 4794 scope.go:117] "RemoveContainer" containerID="2a8eebf719d172ba0c2229b4f65cad16d95a962473737c9ee0aaa715dfc381e0"
Feb 16 18:11:11 crc kubenswrapper[4794]: E0216 18:11:11.567157 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a8eebf719d172ba0c2229b4f65cad16d95a962473737c9ee0aaa715dfc381e0\": container with ID starting with 2a8eebf719d172ba0c2229b4f65cad16d95a962473737c9ee0aaa715dfc381e0 not found: ID does not exist" containerID="2a8eebf719d172ba0c2229b4f65cad16d95a962473737c9ee0aaa715dfc381e0"
Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.567205 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a8eebf719d172ba0c2229b4f65cad16d95a962473737c9ee0aaa715dfc381e0"} err="failed to get container status \"2a8eebf719d172ba0c2229b4f65cad16d95a962473737c9ee0aaa715dfc381e0\": rpc error: code = NotFound desc = could not find container \"2a8eebf719d172ba0c2229b4f65cad16d95a962473737c9ee0aaa715dfc381e0\": container with ID starting with 2a8eebf719d172ba0c2229b4f65cad16d95a962473737c9ee0aaa715dfc381e0 not found: ID does not exist"
Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.567239 4794 scope.go:117] "RemoveContainer" containerID="82776d18922bec07ddaef3bbd83fae9e5910946dd5a20eaa667e4c7f083a4542"
Feb 16 18:11:11 crc kubenswrapper[4794]: E0216 18:11:11.567819 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82776d18922bec07ddaef3bbd83fae9e5910946dd5a20eaa667e4c7f083a4542\": container with ID starting with 82776d18922bec07ddaef3bbd83fae9e5910946dd5a20eaa667e4c7f083a4542 not found: ID does not exist" containerID="82776d18922bec07ddaef3bbd83fae9e5910946dd5a20eaa667e4c7f083a4542"
Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.567983 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82776d18922bec07ddaef3bbd83fae9e5910946dd5a20eaa667e4c7f083a4542"} err="failed to get container status \"82776d18922bec07ddaef3bbd83fae9e5910946dd5a20eaa667e4c7f083a4542\": rpc error: code = NotFound desc = could not find container \"82776d18922bec07ddaef3bbd83fae9e5910946dd5a20eaa667e4c7f083a4542\": container with ID starting with 82776d18922bec07ddaef3bbd83fae9e5910946dd5a20eaa667e4c7f083a4542 not found: ID does not exist"
Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.568133 4794 scope.go:117] "RemoveContainer" containerID="8d8ff0a206cfb0512d6d751479b58926933180571c03b9b8685ade3a854b7bbd"
Feb 16 18:11:11 crc kubenswrapper[4794]: E0216 18:11:11.568639 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d8ff0a206cfb0512d6d751479b58926933180571c03b9b8685ade3a854b7bbd\": container with ID starting with 8d8ff0a206cfb0512d6d751479b58926933180571c03b9b8685ade3a854b7bbd not found: ID does not exist" containerID="8d8ff0a206cfb0512d6d751479b58926933180571c03b9b8685ade3a854b7bbd"
Feb 16 18:11:11 crc kubenswrapper[4794]: I0216 18:11:11.568833 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d8ff0a206cfb0512d6d751479b58926933180571c03b9b8685ade3a854b7bbd"} err="failed to get container status \"8d8ff0a206cfb0512d6d751479b58926933180571c03b9b8685ade3a854b7bbd\": rpc error: code = NotFound desc = could not find container \"8d8ff0a206cfb0512d6d751479b58926933180571c03b9b8685ade3a854b7bbd\": container with ID starting with 8d8ff0a206cfb0512d6d751479b58926933180571c03b9b8685ade3a854b7bbd not found: ID does not exist"
Feb 16 18:11:12 crc kubenswrapper[4794]: I0216 18:11:12.809110 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8df7f41-86e8-40bf-b0d2-987dee5ff000" path="/var/lib/kubelet/pods/b8df7f41-86e8-40bf-b0d2-987dee5ff000/volumes"
Feb 16 18:11:13 crc kubenswrapper[4794]: E0216 18:11:13.793817 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:11:16 crc kubenswrapper[4794]: E0216 18:11:16.797391 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:11:19 crc kubenswrapper[4794]: I0216 18:11:19.791876 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d"
Feb 16 18:11:19 crc kubenswrapper[4794]: E0216 18:11:19.792517 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 18:11:25 crc kubenswrapper[4794]: E0216 18:11:25.793955 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:11:29 crc kubenswrapper[4794]: E0216 18:11:29.794578 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:11:34 crc kubenswrapper[4794]: I0216 18:11:34.800290 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d"
Feb 16 18:11:34 crc kubenswrapper[4794]: E0216 18:11:34.802597 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 18:11:38 crc kubenswrapper[4794]: E0216 18:11:38.794800 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:11:43 crc kubenswrapper[4794]: E0216 18:11:43.794831 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:11:43 crc kubenswrapper[4794]: I0216 18:11:43.980715 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kz2js"]
Feb 16 18:11:43 crc kubenswrapper[4794]: E0216 18:11:43.982389 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8df7f41-86e8-40bf-b0d2-987dee5ff000" containerName="extract-utilities"
Feb 16 18:11:43 crc kubenswrapper[4794]: I0216 18:11:43.982433 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8df7f41-86e8-40bf-b0d2-987dee5ff000" containerName="extract-utilities"
Feb 16 18:11:43 crc kubenswrapper[4794]: E0216 18:11:43.982473 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8df7f41-86e8-40bf-b0d2-987dee5ff000" containerName="extract-content"
Feb 16 18:11:43 crc kubenswrapper[4794]: I0216 18:11:43.982507 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8df7f41-86e8-40bf-b0d2-987dee5ff000" containerName="extract-content"
Feb 16 18:11:43 crc kubenswrapper[4794]: E0216 18:11:43.982533 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8df7f41-86e8-40bf-b0d2-987dee5ff000" containerName="registry-server"
Feb 16 18:11:43 crc kubenswrapper[4794]: I0216 18:11:43.982549 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8df7f41-86e8-40bf-b0d2-987dee5ff000" containerName="registry-server"
Feb 16 18:11:43 crc kubenswrapper[4794]: I0216 18:11:43.983054 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8df7f41-86e8-40bf-b0d2-987dee5ff000" containerName="registry-server"
Feb 16 18:11:43 crc kubenswrapper[4794]: I0216 18:11:43.986905 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kz2js"
Feb 16 18:11:44 crc kubenswrapper[4794]: I0216 18:11:44.000254 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kz2js"]
Feb 16 18:11:44 crc kubenswrapper[4794]: I0216 18:11:44.104789 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb257420-ae22-4e5c-a428-4a1fb1f109b3-catalog-content\") pod \"redhat-operators-kz2js\" (UID: \"cb257420-ae22-4e5c-a428-4a1fb1f109b3\") " pod="openshift-marketplace/redhat-operators-kz2js"
Feb 16 18:11:44 crc kubenswrapper[4794]: I0216 18:11:44.104863 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb257420-ae22-4e5c-a428-4a1fb1f109b3-utilities\") pod \"redhat-operators-kz2js\" (UID: \"cb257420-ae22-4e5c-a428-4a1fb1f109b3\") " pod="openshift-marketplace/redhat-operators-kz2js"
Feb 16 18:11:44 crc kubenswrapper[4794]: I0216 18:11:44.104880 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7pqm\" (UniqueName: \"kubernetes.io/projected/cb257420-ae22-4e5c-a428-4a1fb1f109b3-kube-api-access-r7pqm\") pod \"redhat-operators-kz2js\" (UID: \"cb257420-ae22-4e5c-a428-4a1fb1f109b3\") " pod="openshift-marketplace/redhat-operators-kz2js"
Feb 16 18:11:44 crc kubenswrapper[4794]: I0216 18:11:44.208083 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb257420-ae22-4e5c-a428-4a1fb1f109b3-catalog-content\") pod \"redhat-operators-kz2js\" (UID: \"cb257420-ae22-4e5c-a428-4a1fb1f109b3\") " pod="openshift-marketplace/redhat-operators-kz2js"
Feb 16 18:11:44 crc kubenswrapper[4794]: I0216 18:11:44.208203 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb257420-ae22-4e5c-a428-4a1fb1f109b3-utilities\") pod \"redhat-operators-kz2js\" (UID: \"cb257420-ae22-4e5c-a428-4a1fb1f109b3\") " pod="openshift-marketplace/redhat-operators-kz2js"
Feb 16 18:11:44 crc kubenswrapper[4794]: I0216 18:11:44.208234 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7pqm\" (UniqueName: \"kubernetes.io/projected/cb257420-ae22-4e5c-a428-4a1fb1f109b3-kube-api-access-r7pqm\") pod \"redhat-operators-kz2js\" (UID: \"cb257420-ae22-4e5c-a428-4a1fb1f109b3\") " pod="openshift-marketplace/redhat-operators-kz2js"
Feb 16 18:11:44 crc kubenswrapper[4794]: I0216 18:11:44.208741 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb257420-ae22-4e5c-a428-4a1fb1f109b3-catalog-content\") pod \"redhat-operators-kz2js\" (UID: \"cb257420-ae22-4e5c-a428-4a1fb1f109b3\") " pod="openshift-marketplace/redhat-operators-kz2js"
Feb 16 18:11:44 crc kubenswrapper[4794]: I0216 18:11:44.208786 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb257420-ae22-4e5c-a428-4a1fb1f109b3-utilities\") pod \"redhat-operators-kz2js\" (UID: \"cb257420-ae22-4e5c-a428-4a1fb1f109b3\") " pod="openshift-marketplace/redhat-operators-kz2js"
Feb 16 18:11:44 crc kubenswrapper[4794]: I0216 18:11:44.481064 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7pqm\" (UniqueName: \"kubernetes.io/projected/cb257420-ae22-4e5c-a428-4a1fb1f109b3-kube-api-access-r7pqm\") pod \"redhat-operators-kz2js\" (UID: \"cb257420-ae22-4e5c-a428-4a1fb1f109b3\") " pod="openshift-marketplace/redhat-operators-kz2js"
Feb 16 18:11:44 crc kubenswrapper[4794]: I0216 18:11:44.628151 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kz2js"
Feb 16 18:11:45 crc kubenswrapper[4794]: I0216 18:11:45.185828 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kz2js"]
Feb 16 18:11:45 crc kubenswrapper[4794]: I0216 18:11:45.871981 4794 generic.go:334] "Generic (PLEG): container finished" podID="cb257420-ae22-4e5c-a428-4a1fb1f109b3" containerID="820e4e3d4a58c25516e7cdc80093c8c3a8c14c8347e83579a34a599d7fe93b1c" exitCode=0
Feb 16 18:11:45 crc kubenswrapper[4794]: I0216 18:11:45.872084 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kz2js" event={"ID":"cb257420-ae22-4e5c-a428-4a1fb1f109b3","Type":"ContainerDied","Data":"820e4e3d4a58c25516e7cdc80093c8c3a8c14c8347e83579a34a599d7fe93b1c"}
Feb 16 18:11:45 crc kubenswrapper[4794]: I0216 18:11:45.872362 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kz2js" event={"ID":"cb257420-ae22-4e5c-a428-4a1fb1f109b3","Type":"ContainerStarted","Data":"e2da6179f13978ad6b136546a8927daa780b8874dea282dbc08aa30761f6de3f"}
Feb 16 18:11:47 crc kubenswrapper[4794]: I0216 18:11:47.925938 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kz2js" event={"ID":"cb257420-ae22-4e5c-a428-4a1fb1f109b3","Type":"ContainerStarted","Data":"8a9172d9813a5755b9026670a0af7f987c29a20dd7cbf749863f02ac5fdac820"}
Feb 16 18:11:48 crc kubenswrapper[4794]: I0216 18:11:48.791975 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d"
Feb 16 18:11:48 crc kubenswrapper[4794]: E0216 18:11:48.792604 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 18:11:51 crc kubenswrapper[4794]: E0216 18:11:51.794211 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:11:51 crc kubenswrapper[4794]: I0216 18:11:51.972458 4794 generic.go:334] "Generic (PLEG): container finished" podID="cb257420-ae22-4e5c-a428-4a1fb1f109b3" containerID="8a9172d9813a5755b9026670a0af7f987c29a20dd7cbf749863f02ac5fdac820" exitCode=0
Feb 16 18:11:51 crc kubenswrapper[4794]: I0216 18:11:51.973408 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kz2js" event={"ID":"cb257420-ae22-4e5c-a428-4a1fb1f109b3","Type":"ContainerDied","Data":"8a9172d9813a5755b9026670a0af7f987c29a20dd7cbf749863f02ac5fdac820"}
Feb 16 18:11:52 crc kubenswrapper[4794]: I0216 18:11:52.986809 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kz2js" event={"ID":"cb257420-ae22-4e5c-a428-4a1fb1f109b3","Type":"ContainerStarted","Data":"4d5048f6673f0e33a3102bbf371f24e891a38653ab827fae90cd22b87219d989"}
Feb 16 18:11:53 crc kubenswrapper[4794]: I0216 18:11:53.015647 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kz2js" podStartSLOduration=3.5041972120000002 podStartE2EDuration="10.015620404s" podCreationTimestamp="2026-02-16 18:11:43 +0000 UTC" firstStartedPulling="2026-02-16 18:11:45.874904051 +0000 UTC m=+4331.822998698" lastFinishedPulling="2026-02-16 18:11:52.386327223 +0000 UTC m=+4338.334421890" observedRunningTime="2026-02-16 18:11:53.010819858 +0000 UTC m=+4338.958914505" watchObservedRunningTime="2026-02-16 18:11:53.015620404 +0000 UTC m=+4338.963715051"
Feb 16 18:11:54 crc kubenswrapper[4794]: I0216 18:11:54.628611 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kz2js"
Feb 16 18:11:54 crc kubenswrapper[4794]: I0216 18:11:54.628945 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kz2js"
Feb 16 18:11:55 crc kubenswrapper[4794]: I0216 18:11:55.692216 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kz2js" podUID="cb257420-ae22-4e5c-a428-4a1fb1f109b3" containerName="registry-server" probeResult="failure" output=<
Feb 16 18:11:55 crc kubenswrapper[4794]: timeout: failed to connect service ":50051" within 1s
Feb 16 18:11:55 crc kubenswrapper[4794]: >
Feb 16 18:11:56 crc kubenswrapper[4794]: E0216 18:11:56.793829 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:12:00 crc kubenswrapper[4794]: I0216 18:12:00.792364 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d"
Feb 16 18:12:00 crc kubenswrapper[4794]: E0216 18:12:00.793659 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 18:12:03 crc kubenswrapper[4794]: E0216 18:12:03.794605 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:12:04 crc kubenswrapper[4794]: I0216 18:12:04.684040 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kz2js"
Feb 16 18:12:04 crc kubenswrapper[4794]: I0216 18:12:04.730018 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kz2js"
Feb 16 18:12:05 crc kubenswrapper[4794]: I0216 18:12:05.276076 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m9tjg"]
Feb 16 18:12:05 crc kubenswrapper[4794]: I0216 18:12:05.279100 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m9tjg"
Feb 16 18:12:05 crc kubenswrapper[4794]: I0216 18:12:05.296485 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m9tjg"]
Feb 16 18:12:05 crc kubenswrapper[4794]: I0216 18:12:05.376957 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4404d184-c6ea-453c-9ec0-94aed6db19fa-catalog-content\") pod \"certified-operators-m9tjg\" (UID: \"4404d184-c6ea-453c-9ec0-94aed6db19fa\") " pod="openshift-marketplace/certified-operators-m9tjg"
Feb 16 18:12:05 crc kubenswrapper[4794]: I0216 18:12:05.377367 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbn97\" (UniqueName: \"kubernetes.io/projected/4404d184-c6ea-453c-9ec0-94aed6db19fa-kube-api-access-fbn97\") pod \"certified-operators-m9tjg\" (UID: \"4404d184-c6ea-453c-9ec0-94aed6db19fa\") " pod="openshift-marketplace/certified-operators-m9tjg"
Feb 16 18:12:05 crc kubenswrapper[4794]: I0216 18:12:05.377454 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4404d184-c6ea-453c-9ec0-94aed6db19fa-utilities\") pod \"certified-operators-m9tjg\" (UID: \"4404d184-c6ea-453c-9ec0-94aed6db19fa\") " pod="openshift-marketplace/certified-operators-m9tjg"
Feb 16 18:12:05 crc kubenswrapper[4794]: I0216 18:12:05.479539 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4404d184-c6ea-453c-9ec0-94aed6db19fa-catalog-content\") pod \"certified-operators-m9tjg\" (UID: \"4404d184-c6ea-453c-9ec0-94aed6db19fa\") " pod="openshift-marketplace/certified-operators-m9tjg"
Feb 16 18:12:05 crc kubenswrapper[4794]: I0216 18:12:05.479854 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbn97\" (UniqueName: \"kubernetes.io/projected/4404d184-c6ea-453c-9ec0-94aed6db19fa-kube-api-access-fbn97\") pod \"certified-operators-m9tjg\" (UID: \"4404d184-c6ea-453c-9ec0-94aed6db19fa\") " pod="openshift-marketplace/certified-operators-m9tjg"
Feb 16 18:12:05 crc kubenswrapper[4794]: I0216 18:12:05.479993 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4404d184-c6ea-453c-9ec0-94aed6db19fa-utilities\") pod \"certified-operators-m9tjg\" (UID: \"4404d184-c6ea-453c-9ec0-94aed6db19fa\") " pod="openshift-marketplace/certified-operators-m9tjg"
Feb 16 18:12:05 crc kubenswrapper[4794]: I0216 18:12:05.480109 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4404d184-c6ea-453c-9ec0-94aed6db19fa-catalog-content\") pod \"certified-operators-m9tjg\" (UID: \"4404d184-c6ea-453c-9ec0-94aed6db19fa\") " pod="openshift-marketplace/certified-operators-m9tjg"
Feb 16 18:12:05 crc kubenswrapper[4794]: I0216 18:12:05.480332 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4404d184-c6ea-453c-9ec0-94aed6db19fa-utilities\") pod \"certified-operators-m9tjg\" (UID: \"4404d184-c6ea-453c-9ec0-94aed6db19fa\") " pod="openshift-marketplace/certified-operators-m9tjg"
Feb 16 18:12:05 crc kubenswrapper[4794]: I0216 18:12:05.510609 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbn97\" (UniqueName: \"kubernetes.io/projected/4404d184-c6ea-453c-9ec0-94aed6db19fa-kube-api-access-fbn97\") pod \"certified-operators-m9tjg\" (UID: \"4404d184-c6ea-453c-9ec0-94aed6db19fa\") " pod="openshift-marketplace/certified-operators-m9tjg"
Feb 16 18:12:05 crc kubenswrapper[4794]: I0216 18:12:05.600933 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m9tjg"
Feb 16 18:12:06 crc kubenswrapper[4794]: I0216 18:12:06.142866 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m9tjg"]
Feb 16 18:12:06 crc kubenswrapper[4794]: I0216 18:12:06.164034 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m9tjg" event={"ID":"4404d184-c6ea-453c-9ec0-94aed6db19fa","Type":"ContainerStarted","Data":"f57d5304a3cb9ce697589840fc6a2a798a52a607066acb5edd04d2fef2e3fffd"}
Feb 16 18:12:07 crc kubenswrapper[4794]: I0216 18:12:07.180704 4794 generic.go:334] "Generic (PLEG): container finished" podID="4404d184-c6ea-453c-9ec0-94aed6db19fa" containerID="c4ecdcf0a727ec5ddd7fc3afd98d7a94aca6a4fc252ea6dd5a1ea352f3d19321" exitCode=0
Feb 16 18:12:07 crc kubenswrapper[4794]: I0216 18:12:07.180768 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m9tjg" event={"ID":"4404d184-c6ea-453c-9ec0-94aed6db19fa","Type":"ContainerDied","Data":"c4ecdcf0a727ec5ddd7fc3afd98d7a94aca6a4fc252ea6dd5a1ea352f3d19321"}
Feb 16 18:12:08 crc kubenswrapper[4794]: I0216 18:12:08.194023 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m9tjg" event={"ID":"4404d184-c6ea-453c-9ec0-94aed6db19fa","Type":"ContainerStarted","Data":"7e8083e9e17ef472068eab63202d017d6cceca6d9c4a3ce357443c4cd57340a9"}
Feb 16 18:12:09 crc kubenswrapper[4794]: I0216 18:12:09.661034 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kz2js"]
Feb 16 18:12:09 crc kubenswrapper[4794]: I0216 18:12:09.661674 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kz2js" podUID="cb257420-ae22-4e5c-a428-4a1fb1f109b3" containerName="registry-server" containerID="cri-o://4d5048f6673f0e33a3102bbf371f24e891a38653ab827fae90cd22b87219d989" gracePeriod=2
Feb 16 18:12:10 crc kubenswrapper[4794]: I0216 18:12:10.216294 4794 generic.go:334] "Generic (PLEG): container finished" podID="4404d184-c6ea-453c-9ec0-94aed6db19fa" containerID="7e8083e9e17ef472068eab63202d017d6cceca6d9c4a3ce357443c4cd57340a9" exitCode=0
Feb 16 18:12:10 crc kubenswrapper[4794]: I0216 18:12:10.216349 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m9tjg" event={"ID":"4404d184-c6ea-453c-9ec0-94aed6db19fa","Type":"ContainerDied","Data":"7e8083e9e17ef472068eab63202d017d6cceca6d9c4a3ce357443c4cd57340a9"}
Feb 16 18:12:10 crc kubenswrapper[4794]: I0216 18:12:10.220968 4794 generic.go:334] "Generic (PLEG): container finished" podID="cb257420-ae22-4e5c-a428-4a1fb1f109b3" containerID="4d5048f6673f0e33a3102bbf371f24e891a38653ab827fae90cd22b87219d989" exitCode=0
Feb 16 18:12:10 crc kubenswrapper[4794]: I0216 18:12:10.221053 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kz2js" event={"ID":"cb257420-ae22-4e5c-a428-4a1fb1f109b3","Type":"ContainerDied","Data":"4d5048f6673f0e33a3102bbf371f24e891a38653ab827fae90cd22b87219d989"}
Feb 16 18:12:10 crc kubenswrapper[4794]: I0216 18:12:10.221090 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kz2js" event={"ID":"cb257420-ae22-4e5c-a428-4a1fb1f109b3","Type":"ContainerDied","Data":"e2da6179f13978ad6b136546a8927daa780b8874dea282dbc08aa30761f6de3f"}
Feb 16 18:12:10 crc kubenswrapper[4794]: I0216 18:12:10.221101 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2da6179f13978ad6b136546a8927daa780b8874dea282dbc08aa30761f6de3f"
Feb 16 18:12:10 crc kubenswrapper[4794]: I0216 18:12:10.315825 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kz2js"
Feb 16 18:12:10 crc kubenswrapper[4794]: I0216 18:12:10.410981 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb257420-ae22-4e5c-a428-4a1fb1f109b3-utilities\") pod \"cb257420-ae22-4e5c-a428-4a1fb1f109b3\" (UID: \"cb257420-ae22-4e5c-a428-4a1fb1f109b3\") "
Feb 16 18:12:10 crc kubenswrapper[4794]: I0216 18:12:10.411374 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb257420-ae22-4e5c-a428-4a1fb1f109b3-catalog-content\") pod \"cb257420-ae22-4e5c-a428-4a1fb1f109b3\" (UID: \"cb257420-ae22-4e5c-a428-4a1fb1f109b3\") "
Feb 16 18:12:10 crc kubenswrapper[4794]: I0216 18:12:10.411536 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7pqm\" (UniqueName: \"kubernetes.io/projected/cb257420-ae22-4e5c-a428-4a1fb1f109b3-kube-api-access-r7pqm\") pod \"cb257420-ae22-4e5c-a428-4a1fb1f109b3\" (UID: \"cb257420-ae22-4e5c-a428-4a1fb1f109b3\") "
Feb 16 18:12:10 crc kubenswrapper[4794]: I0216 18:12:10.412382 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb257420-ae22-4e5c-a428-4a1fb1f109b3-utilities" (OuterVolumeSpecName: "utilities") pod "cb257420-ae22-4e5c-a428-4a1fb1f109b3" (UID: "cb257420-ae22-4e5c-a428-4a1fb1f109b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 18:12:10 crc kubenswrapper[4794]: I0216 18:12:10.418372 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb257420-ae22-4e5c-a428-4a1fb1f109b3-kube-api-access-r7pqm" (OuterVolumeSpecName: "kube-api-access-r7pqm") pod "cb257420-ae22-4e5c-a428-4a1fb1f109b3" (UID: "cb257420-ae22-4e5c-a428-4a1fb1f109b3"). InnerVolumeSpecName "kube-api-access-r7pqm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 18:12:10 crc kubenswrapper[4794]: I0216 18:12:10.514289 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb257420-ae22-4e5c-a428-4a1fb1f109b3-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 18:12:10 crc kubenswrapper[4794]: I0216 18:12:10.514333 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7pqm\" (UniqueName: \"kubernetes.io/projected/cb257420-ae22-4e5c-a428-4a1fb1f109b3-kube-api-access-r7pqm\") on node \"crc\" DevicePath \"\""
Feb 16 18:12:10 crc kubenswrapper[4794]: I0216 18:12:10.536795 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb257420-ae22-4e5c-a428-4a1fb1f109b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb257420-ae22-4e5c-a428-4a1fb1f109b3" (UID: "cb257420-ae22-4e5c-a428-4a1fb1f109b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 18:12:10 crc kubenswrapper[4794]: I0216 18:12:10.617812 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb257420-ae22-4e5c-a428-4a1fb1f109b3-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 18:12:11 crc kubenswrapper[4794]: I0216 18:12:11.238495 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kz2js"
Feb 16 18:12:11 crc kubenswrapper[4794]: I0216 18:12:11.691974 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kz2js"]
Feb 16 18:12:11 crc kubenswrapper[4794]: I0216 18:12:11.703856 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kz2js"]
Feb 16 18:12:11 crc kubenswrapper[4794]: E0216 18:12:11.792796 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:12:12 crc kubenswrapper[4794]: I0216 18:12:12.250367 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m9tjg" event={"ID":"4404d184-c6ea-453c-9ec0-94aed6db19fa","Type":"ContainerStarted","Data":"c8e53e55509c3f7c5c4a2c32f4a66146c1e5080906168c73e278b09c4bd43959"}
Feb 16 18:12:12 crc kubenswrapper[4794]: I0216 18:12:12.791879 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d"
Feb 16 18:12:12 crc kubenswrapper[4794]: E0216 18:12:12.792604 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 18:12:12 crc kubenswrapper[4794]: I0216 18:12:12.812034 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb257420-ae22-4e5c-a428-4a1fb1f109b3" path="/var/lib/kubelet/pods/cb257420-ae22-4e5c-a428-4a1fb1f109b3/volumes"
Feb 16 18:12:15 crc kubenswrapper[4794]: I0216 18:12:15.601238 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m9tjg"
Feb 16 18:12:15 crc kubenswrapper[4794]: I0216 18:12:15.601906 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m9tjg"
Feb 16 18:12:15 crc kubenswrapper[4794]: I0216 18:12:15.662170 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m9tjg"
Feb 16 18:12:15 crc kubenswrapper[4794]: I0216 18:12:15.699866 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m9tjg" podStartSLOduration=7.024628092 podStartE2EDuration="10.699839926s" podCreationTimestamp="2026-02-16 18:12:05 +0000 UTC" firstStartedPulling="2026-02-16 18:12:07.183700459 +0000 UTC m=+4353.131795156" lastFinishedPulling="2026-02-16 18:12:10.858912343 +0000 UTC m=+4356.807006990" observedRunningTime="2026-02-16 18:12:12.273418035 +0000 UTC m=+4358.221512692" watchObservedRunningTime="2026-02-16 18:12:15.699839926 +0000 UTC m=+4361.647934603"
Feb 16 18:12:15 crc kubenswrapper[4794]: E0216 18:12:15.795435 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:12:16 crc kubenswrapper[4794]: I0216 18:12:16.354008 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m9tjg"
Feb 16 18:12:20 crc kubenswrapper[4794]: I0216 18:12:20.865539 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m9tjg"]
Feb 16 18:12:20 crc kubenswrapper[4794]: I0216 18:12:20.866373 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m9tjg" podUID="4404d184-c6ea-453c-9ec0-94aed6db19fa" containerName="registry-server" containerID="cri-o://c8e53e55509c3f7c5c4a2c32f4a66146c1e5080906168c73e278b09c4bd43959" gracePeriod=2
Feb 16 18:12:21 crc kubenswrapper[4794]: I0216 18:12:21.367969 4794 generic.go:334] "Generic (PLEG): container finished" podID="4404d184-c6ea-453c-9ec0-94aed6db19fa" containerID="c8e53e55509c3f7c5c4a2c32f4a66146c1e5080906168c73e278b09c4bd43959" exitCode=0
Feb 16 18:12:21 crc kubenswrapper[4794]: I0216 18:12:21.368400 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m9tjg" event={"ID":"4404d184-c6ea-453c-9ec0-94aed6db19fa","Type":"ContainerDied","Data":"c8e53e55509c3f7c5c4a2c32f4a66146c1e5080906168c73e278b09c4bd43959"}
Feb 16 18:12:21 crc kubenswrapper[4794]: I0216 18:12:21.710769 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m9tjg" Feb 16 18:12:21 crc kubenswrapper[4794]: I0216 18:12:21.817723 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4404d184-c6ea-453c-9ec0-94aed6db19fa-catalog-content\") pod \"4404d184-c6ea-453c-9ec0-94aed6db19fa\" (UID: \"4404d184-c6ea-453c-9ec0-94aed6db19fa\") " Feb 16 18:12:21 crc kubenswrapper[4794]: I0216 18:12:21.817986 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbn97\" (UniqueName: \"kubernetes.io/projected/4404d184-c6ea-453c-9ec0-94aed6db19fa-kube-api-access-fbn97\") pod \"4404d184-c6ea-453c-9ec0-94aed6db19fa\" (UID: \"4404d184-c6ea-453c-9ec0-94aed6db19fa\") " Feb 16 18:12:21 crc kubenswrapper[4794]: I0216 18:12:21.818172 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4404d184-c6ea-453c-9ec0-94aed6db19fa-utilities\") pod \"4404d184-c6ea-453c-9ec0-94aed6db19fa\" (UID: \"4404d184-c6ea-453c-9ec0-94aed6db19fa\") " Feb 16 18:12:21 crc kubenswrapper[4794]: I0216 18:12:21.819500 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4404d184-c6ea-453c-9ec0-94aed6db19fa-utilities" (OuterVolumeSpecName: "utilities") pod "4404d184-c6ea-453c-9ec0-94aed6db19fa" (UID: "4404d184-c6ea-453c-9ec0-94aed6db19fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:12:21 crc kubenswrapper[4794]: I0216 18:12:21.871626 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4404d184-c6ea-453c-9ec0-94aed6db19fa-kube-api-access-fbn97" (OuterVolumeSpecName: "kube-api-access-fbn97") pod "4404d184-c6ea-453c-9ec0-94aed6db19fa" (UID: "4404d184-c6ea-453c-9ec0-94aed6db19fa"). InnerVolumeSpecName "kube-api-access-fbn97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 18:12:21 crc kubenswrapper[4794]: I0216 18:12:21.872832 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4404d184-c6ea-453c-9ec0-94aed6db19fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4404d184-c6ea-453c-9ec0-94aed6db19fa" (UID: "4404d184-c6ea-453c-9ec0-94aed6db19fa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:12:21 crc kubenswrapper[4794]: I0216 18:12:21.921905 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4404d184-c6ea-453c-9ec0-94aed6db19fa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 18:12:21 crc kubenswrapper[4794]: I0216 18:12:21.921977 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbn97\" (UniqueName: \"kubernetes.io/projected/4404d184-c6ea-453c-9ec0-94aed6db19fa-kube-api-access-fbn97\") on node \"crc\" DevicePath \"\"" Feb 16 18:12:21 crc kubenswrapper[4794]: I0216 18:12:21.922010 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4404d184-c6ea-453c-9ec0-94aed6db19fa-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 18:12:22 crc kubenswrapper[4794]: I0216 18:12:22.379158 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m9tjg" event={"ID":"4404d184-c6ea-453c-9ec0-94aed6db19fa","Type":"ContainerDied","Data":"f57d5304a3cb9ce697589840fc6a2a798a52a607066acb5edd04d2fef2e3fffd"} Feb 16 18:12:22 crc kubenswrapper[4794]: I0216 18:12:22.379512 4794 scope.go:117] "RemoveContainer" containerID="c8e53e55509c3f7c5c4a2c32f4a66146c1e5080906168c73e278b09c4bd43959" Feb 16 18:12:22 crc kubenswrapper[4794]: I0216 18:12:22.379202 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m9tjg" Feb 16 18:12:22 crc kubenswrapper[4794]: I0216 18:12:22.403027 4794 scope.go:117] "RemoveContainer" containerID="7e8083e9e17ef472068eab63202d017d6cceca6d9c4a3ce357443c4cd57340a9" Feb 16 18:12:22 crc kubenswrapper[4794]: I0216 18:12:22.424397 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m9tjg"] Feb 16 18:12:22 crc kubenswrapper[4794]: I0216 18:12:22.437404 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m9tjg"] Feb 16 18:12:22 crc kubenswrapper[4794]: I0216 18:12:22.452571 4794 scope.go:117] "RemoveContainer" containerID="c4ecdcf0a727ec5ddd7fc3afd98d7a94aca6a4fc252ea6dd5a1ea352f3d19321" Feb 16 18:12:22 crc kubenswrapper[4794]: E0216 18:12:22.792622 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:12:22 crc kubenswrapper[4794]: I0216 18:12:22.803605 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4404d184-c6ea-453c-9ec0-94aed6db19fa" path="/var/lib/kubelet/pods/4404d184-c6ea-453c-9ec0-94aed6db19fa/volumes" Feb 16 18:12:26 crc kubenswrapper[4794]: I0216 18:12:26.792734 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d" Feb 16 18:12:26 crc kubenswrapper[4794]: E0216 18:12:26.794040 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:12:26 crc kubenswrapper[4794]: E0216 18:12:26.796930 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:12:33 crc kubenswrapper[4794]: E0216 18:12:33.794972 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:12:37 crc kubenswrapper[4794]: I0216 18:12:37.792357 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d" Feb 16 18:12:37 crc kubenswrapper[4794]: E0216 18:12:37.792950 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:12:38 crc kubenswrapper[4794]: I0216 18:12:38.794918 4794 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 18:12:38 crc kubenswrapper[4794]: E0216 18:12:38.893878 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 18:12:38 crc kubenswrapper[4794]: E0216 18:12:38.893940 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 18:12:38 crc kubenswrapper[4794]: E0216 18:12:38.894058 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2h5l2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7gcsf_openstack(c695f880-15cb-45b1-8545-60d8437ec631): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 18:12:38 crc kubenswrapper[4794]: E0216 18:12:38.895248 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:12:48 crc kubenswrapper[4794]: E0216 18:12:48.938464 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 18:12:48 crc kubenswrapper[4794]: E0216 18:12:48.939028 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 18:12:48 crc kubenswrapper[4794]: E0216 18:12:48.939156 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh58dh6ch557h84h55ch564h5bh58fh5c8h5d4h584h669h667h569h59hd5hdbh9dh67ch5f9h59fh597h96h664h687h66dhfch5ddh5b7h88h59cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9v9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8981f528-1f74-4d56-a93c-22860725b490): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 18:12:48 crc kubenswrapper[4794]: E0216 18:12:48.940570 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:12:49 crc kubenswrapper[4794]: E0216 18:12:49.793489 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:12:50 crc kubenswrapper[4794]: I0216 18:12:50.791739 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d" Feb 16 18:12:50 crc kubenswrapper[4794]: E0216 18:12:50.792542 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:13:02 crc kubenswrapper[4794]: I0216 18:13:02.792232 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d" Feb 16 18:13:02 crc kubenswrapper[4794]: E0216 18:13:02.793053 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:13:02 crc kubenswrapper[4794]: E0216 18:13:02.793863 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:13:04 crc kubenswrapper[4794]: E0216 18:13:04.806428 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:13:13 crc kubenswrapper[4794]: E0216 18:13:13.794619 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:13:16 crc kubenswrapper[4794]: I0216 18:13:16.792599 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d" Feb 16 18:13:16 crc kubenswrapper[4794]: E0216 18:13:16.793396 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:13:19 crc kubenswrapper[4794]: E0216 18:13:19.795916 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:13:24 crc kubenswrapper[4794]: E0216 18:13:24.802180 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:13:30 crc kubenswrapper[4794]: I0216 18:13:30.793116 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d" Feb 16 18:13:30 crc kubenswrapper[4794]: E0216 18:13:30.796938 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:13:32 crc kubenswrapper[4794]: E0216 18:13:32.793422 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:13:35 crc kubenswrapper[4794]: E0216 18:13:35.796161 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:13:44 crc kubenswrapper[4794]: I0216 18:13:44.804676 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d" Feb 16 18:13:44 crc kubenswrapper[4794]: E0216 18:13:44.805493 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:13:45 crc kubenswrapper[4794]: E0216 18:13:45.793956 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:13:47 crc kubenswrapper[4794]: E0216 18:13:47.794160 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:13:56 crc kubenswrapper[4794]: E0216 18:13:56.794363 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:13:58 crc kubenswrapper[4794]: I0216 
18:13:58.791645 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d" Feb 16 18:13:58 crc kubenswrapper[4794]: E0216 18:13:58.792258 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:13:59 crc kubenswrapper[4794]: E0216 18:13:59.795430 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:14:07 crc kubenswrapper[4794]: E0216 18:14:07.795715 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:14:12 crc kubenswrapper[4794]: I0216 18:14:12.837929 4794 generic.go:334] "Generic (PLEG): container finished" podID="d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12" containerID="f7568096cd1993ac5b758c72ab35c7e051b79f7b5ea5caad4140cf5262d458a4" exitCode=2 Feb 16 18:14:12 crc kubenswrapper[4794]: I0216 18:14:12.837992 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg" 
event={"ID":"d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12","Type":"ContainerDied","Data":"f7568096cd1993ac5b758c72ab35c7e051b79f7b5ea5caad4140cf5262d458a4"}
Feb 16 18:14:13 crc kubenswrapper[4794]: I0216 18:14:13.792780 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d"
Feb 16 18:14:13 crc kubenswrapper[4794]: E0216 18:14:13.793486 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 18:14:14 crc kubenswrapper[4794]: I0216 18:14:14.377099 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg"
Feb 16 18:14:14 crc kubenswrapper[4794]: I0216 18:14:14.413053 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rc6\" (UniqueName: \"kubernetes.io/projected/d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12-kube-api-access-w9rc6\") pod \"d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12\" (UID: \"d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12\") "
Feb 16 18:14:14 crc kubenswrapper[4794]: I0216 18:14:14.413150 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12-inventory\") pod \"d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12\" (UID: \"d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12\") "
Feb 16 18:14:14 crc kubenswrapper[4794]: I0216 18:14:14.413485 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12-ssh-key-openstack-edpm-ipam\") pod \"d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12\" (UID: \"d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12\") "
Feb 16 18:14:14 crc kubenswrapper[4794]: I0216 18:14:14.420954 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12-kube-api-access-w9rc6" (OuterVolumeSpecName: "kube-api-access-w9rc6") pod "d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12" (UID: "d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12"). InnerVolumeSpecName "kube-api-access-w9rc6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 18:14:14 crc kubenswrapper[4794]: I0216 18:14:14.447194 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12" (UID: "d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 18:14:14 crc kubenswrapper[4794]: I0216 18:14:14.457316 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12-inventory" (OuterVolumeSpecName: "inventory") pod "d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12" (UID: "d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 18:14:14 crc kubenswrapper[4794]: I0216 18:14:14.517591 4794 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 16 18:14:14 crc kubenswrapper[4794]: I0216 18:14:14.517889 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rc6\" (UniqueName: \"kubernetes.io/projected/d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12-kube-api-access-w9rc6\") on node \"crc\" DevicePath \"\""
Feb 16 18:14:14 crc kubenswrapper[4794]: I0216 18:14:14.517903 4794 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12-inventory\") on node \"crc\" DevicePath \"\""
Feb 16 18:14:14 crc kubenswrapper[4794]: E0216 18:14:14.801179 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:14:14 crc kubenswrapper[4794]: I0216 18:14:14.887936 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg" event={"ID":"d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12","Type":"ContainerDied","Data":"dc8d8cbb5db161d87302a4a4b2116dd5b65df49edd32d1c61030c569a3fe93cb"}
Feb 16 18:14:14 crc kubenswrapper[4794]: I0216 18:14:14.887993 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc8d8cbb5db161d87302a4a4b2116dd5b65df49edd32d1c61030c569a3fe93cb"
Feb 16 18:14:14 crc kubenswrapper[4794]: I0216 18:14:14.889010 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg"
Feb 16 18:14:22 crc kubenswrapper[4794]: E0216 18:14:22.793868 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:14:25 crc kubenswrapper[4794]: E0216 18:14:25.794724 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:14:28 crc kubenswrapper[4794]: I0216 18:14:28.791853 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d"
Feb 16 18:14:28 crc kubenswrapper[4794]: E0216 18:14:28.792662 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 18:14:37 crc kubenswrapper[4794]: E0216 18:14:37.793977 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:14:40 crc kubenswrapper[4794]: I0216 18:14:40.791972 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d"
Feb 16 18:14:40 crc kubenswrapper[4794]: E0216 18:14:40.792673 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 18:14:40 crc kubenswrapper[4794]: E0216 18:14:40.793923 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:14:50 crc kubenswrapper[4794]: E0216 18:14:50.797562 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:14:51 crc kubenswrapper[4794]: E0216 18:14:51.794651 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:14:55 crc kubenswrapper[4794]: I0216 18:14:55.791863 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d"
Feb 16 18:14:55 crc kubenswrapper[4794]: E0216 18:14:55.792978 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.180401 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521095-7g5rd"]
Feb 16 18:15:00 crc kubenswrapper[4794]: E0216 18:15:00.182922 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb257420-ae22-4e5c-a428-4a1fb1f109b3" containerName="extract-content"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.183078 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb257420-ae22-4e5c-a428-4a1fb1f109b3" containerName="extract-content"
Feb 16 18:15:00 crc kubenswrapper[4794]: E0216 18:15:00.183221 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb257420-ae22-4e5c-a428-4a1fb1f109b3" containerName="registry-server"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.183396 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb257420-ae22-4e5c-a428-4a1fb1f109b3" containerName="registry-server"
Feb 16 18:15:00 crc kubenswrapper[4794]: E0216 18:15:00.183592 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4404d184-c6ea-453c-9ec0-94aed6db19fa" containerName="extract-content"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.183716 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="4404d184-c6ea-453c-9ec0-94aed6db19fa" containerName="extract-content"
Feb 16 18:15:00 crc kubenswrapper[4794]: E0216 18:15:00.183847 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb257420-ae22-4e5c-a428-4a1fb1f109b3" containerName="extract-utilities"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.183963 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb257420-ae22-4e5c-a428-4a1fb1f109b3" containerName="extract-utilities"
Feb 16 18:15:00 crc kubenswrapper[4794]: E0216 18:15:00.184091 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4404d184-c6ea-453c-9ec0-94aed6db19fa" containerName="registry-server"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.184197 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="4404d184-c6ea-453c-9ec0-94aed6db19fa" containerName="registry-server"
Feb 16 18:15:00 crc kubenswrapper[4794]: E0216 18:15:00.184343 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.184474 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 18:15:00 crc kubenswrapper[4794]: E0216 18:15:00.184617 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4404d184-c6ea-453c-9ec0-94aed6db19fa" containerName="extract-utilities"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.184741 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="4404d184-c6ea-453c-9ec0-94aed6db19fa" containerName="extract-utilities"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.185284 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="4404d184-c6ea-453c-9ec0-94aed6db19fa" containerName="registry-server"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.185578 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb257420-ae22-4e5c-a428-4a1fb1f109b3" containerName="registry-server"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.185696 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.187300 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-7g5rd"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.190051 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.190101 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521095-7g5rd"]
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.190248 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.290968 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlv6p\" (UniqueName: \"kubernetes.io/projected/c71b34a0-a95e-4675-91f0-9f65bad2a8e2-kube-api-access-dlv6p\") pod \"collect-profiles-29521095-7g5rd\" (UID: \"c71b34a0-a95e-4675-91f0-9f65bad2a8e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-7g5rd"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.291039 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c71b34a0-a95e-4675-91f0-9f65bad2a8e2-secret-volume\") pod \"collect-profiles-29521095-7g5rd\" (UID: \"c71b34a0-a95e-4675-91f0-9f65bad2a8e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-7g5rd"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.291633 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c71b34a0-a95e-4675-91f0-9f65bad2a8e2-config-volume\") pod \"collect-profiles-29521095-7g5rd\" (UID: \"c71b34a0-a95e-4675-91f0-9f65bad2a8e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-7g5rd"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.395257 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c71b34a0-a95e-4675-91f0-9f65bad2a8e2-config-volume\") pod \"collect-profiles-29521095-7g5rd\" (UID: \"c71b34a0-a95e-4675-91f0-9f65bad2a8e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-7g5rd"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.396350 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlv6p\" (UniqueName: \"kubernetes.io/projected/c71b34a0-a95e-4675-91f0-9f65bad2a8e2-kube-api-access-dlv6p\") pod \"collect-profiles-29521095-7g5rd\" (UID: \"c71b34a0-a95e-4675-91f0-9f65bad2a8e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-7g5rd"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.397342 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c71b34a0-a95e-4675-91f0-9f65bad2a8e2-secret-volume\") pod \"collect-profiles-29521095-7g5rd\" (UID: \"c71b34a0-a95e-4675-91f0-9f65bad2a8e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-7g5rd"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.401256 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c71b34a0-a95e-4675-91f0-9f65bad2a8e2-config-volume\") pod \"collect-profiles-29521095-7g5rd\" (UID: \"c71b34a0-a95e-4675-91f0-9f65bad2a8e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-7g5rd"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.481378 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c71b34a0-a95e-4675-91f0-9f65bad2a8e2-secret-volume\") pod \"collect-profiles-29521095-7g5rd\" (UID: \"c71b34a0-a95e-4675-91f0-9f65bad2a8e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-7g5rd"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.482409 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlv6p\" (UniqueName: \"kubernetes.io/projected/c71b34a0-a95e-4675-91f0-9f65bad2a8e2-kube-api-access-dlv6p\") pod \"collect-profiles-29521095-7g5rd\" (UID: \"c71b34a0-a95e-4675-91f0-9f65bad2a8e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-7g5rd"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.520046 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-7g5rd"
Feb 16 18:15:00 crc kubenswrapper[4794]: I0216 18:15:00.980340 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521095-7g5rd"]
Feb 16 18:15:01 crc kubenswrapper[4794]: I0216 18:15:01.425485 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-7g5rd" event={"ID":"c71b34a0-a95e-4675-91f0-9f65bad2a8e2","Type":"ContainerStarted","Data":"c2fc003ebee5791065f5dcf3d8a455af464d8357f0857f0c8651448ce63ea7ed"}
Feb 16 18:15:01 crc kubenswrapper[4794]: I0216 18:15:01.425791 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-7g5rd" event={"ID":"c71b34a0-a95e-4675-91f0-9f65bad2a8e2","Type":"ContainerStarted","Data":"6cf0cf60c8a3555eacbf3a2fad730437b89e7acc7d4a0a27ab7516c2ca279a45"}
Feb 16 18:15:01 crc kubenswrapper[4794]: I0216 18:15:01.452256 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-7g5rd" podStartSLOduration=1.452231802 podStartE2EDuration="1.452231802s" podCreationTimestamp="2026-02-16 18:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 18:15:01.438076051 +0000 UTC m=+4527.386170708" watchObservedRunningTime="2026-02-16 18:15:01.452231802 +0000 UTC m=+4527.400326469"
Feb 16 18:15:02 crc kubenswrapper[4794]: I0216 18:15:02.444271 4794 generic.go:334] "Generic (PLEG): container finished" podID="c71b34a0-a95e-4675-91f0-9f65bad2a8e2" containerID="c2fc003ebee5791065f5dcf3d8a455af464d8357f0857f0c8651448ce63ea7ed" exitCode=0
Feb 16 18:15:02 crc kubenswrapper[4794]: I0216 18:15:02.444760 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-7g5rd" event={"ID":"c71b34a0-a95e-4675-91f0-9f65bad2a8e2","Type":"ContainerDied","Data":"c2fc003ebee5791065f5dcf3d8a455af464d8357f0857f0c8651448ce63ea7ed"}
Feb 16 18:15:03 crc kubenswrapper[4794]: E0216 18:15:03.800263 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:15:03 crc kubenswrapper[4794]: E0216 18:15:03.800829 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:15:03 crc kubenswrapper[4794]: I0216 18:15:03.935943 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-7g5rd"
Feb 16 18:15:04 crc kubenswrapper[4794]: I0216 18:15:04.012571 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlv6p\" (UniqueName: \"kubernetes.io/projected/c71b34a0-a95e-4675-91f0-9f65bad2a8e2-kube-api-access-dlv6p\") pod \"c71b34a0-a95e-4675-91f0-9f65bad2a8e2\" (UID: \"c71b34a0-a95e-4675-91f0-9f65bad2a8e2\") "
Feb 16 18:15:04 crc kubenswrapper[4794]: I0216 18:15:04.012701 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c71b34a0-a95e-4675-91f0-9f65bad2a8e2-secret-volume\") pod \"c71b34a0-a95e-4675-91f0-9f65bad2a8e2\" (UID: \"c71b34a0-a95e-4675-91f0-9f65bad2a8e2\") "
Feb 16 18:15:04 crc kubenswrapper[4794]: I0216 18:15:04.012758 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c71b34a0-a95e-4675-91f0-9f65bad2a8e2-config-volume\") pod \"c71b34a0-a95e-4675-91f0-9f65bad2a8e2\" (UID: \"c71b34a0-a95e-4675-91f0-9f65bad2a8e2\") "
Feb 16 18:15:04 crc kubenswrapper[4794]: I0216 18:15:04.014635 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c71b34a0-a95e-4675-91f0-9f65bad2a8e2-config-volume" (OuterVolumeSpecName: "config-volume") pod "c71b34a0-a95e-4675-91f0-9f65bad2a8e2" (UID: "c71b34a0-a95e-4675-91f0-9f65bad2a8e2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 18:15:04 crc kubenswrapper[4794]: I0216 18:15:04.019894 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c71b34a0-a95e-4675-91f0-9f65bad2a8e2-kube-api-access-dlv6p" (OuterVolumeSpecName: "kube-api-access-dlv6p") pod "c71b34a0-a95e-4675-91f0-9f65bad2a8e2" (UID: "c71b34a0-a95e-4675-91f0-9f65bad2a8e2"). InnerVolumeSpecName "kube-api-access-dlv6p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 18:15:04 crc kubenswrapper[4794]: I0216 18:15:04.020250 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c71b34a0-a95e-4675-91f0-9f65bad2a8e2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c71b34a0-a95e-4675-91f0-9f65bad2a8e2" (UID: "c71b34a0-a95e-4675-91f0-9f65bad2a8e2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 18:15:04 crc kubenswrapper[4794]: I0216 18:15:04.118783 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlv6p\" (UniqueName: \"kubernetes.io/projected/c71b34a0-a95e-4675-91f0-9f65bad2a8e2-kube-api-access-dlv6p\") on node \"crc\" DevicePath \"\""
Feb 16 18:15:04 crc kubenswrapper[4794]: I0216 18:15:04.118826 4794 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c71b34a0-a95e-4675-91f0-9f65bad2a8e2-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 16 18:15:04 crc kubenswrapper[4794]: I0216 18:15:04.118835 4794 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c71b34a0-a95e-4675-91f0-9f65bad2a8e2-config-volume\") on node \"crc\" DevicePath \"\""
Feb 16 18:15:04 crc kubenswrapper[4794]: I0216 18:15:04.472255 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-7g5rd" event={"ID":"c71b34a0-a95e-4675-91f0-9f65bad2a8e2","Type":"ContainerDied","Data":"6cf0cf60c8a3555eacbf3a2fad730437b89e7acc7d4a0a27ab7516c2ca279a45"}
Feb 16 18:15:04 crc kubenswrapper[4794]: I0216 18:15:04.472565 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cf0cf60c8a3555eacbf3a2fad730437b89e7acc7d4a0a27ab7516c2ca279a45"
Feb 16 18:15:04 crc kubenswrapper[4794]: I0216 18:15:04.472415 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-7g5rd"
Feb 16 18:15:04 crc kubenswrapper[4794]: I0216 18:15:04.547684 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521050-jzcdh"]
Feb 16 18:15:04 crc kubenswrapper[4794]: I0216 18:15:04.562680 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521050-jzcdh"]
Feb 16 18:15:04 crc kubenswrapper[4794]: I0216 18:15:04.810092 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44652222-f734-44ff-8769-44adae44fc93" path="/var/lib/kubelet/pods/44652222-f734-44ff-8769-44adae44fc93/volumes"
Feb 16 18:15:07 crc kubenswrapper[4794]: I0216 18:15:07.792819 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d"
Feb 16 18:15:07 crc kubenswrapper[4794]: E0216 18:15:07.795036 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 18:15:14 crc kubenswrapper[4794]: E0216 18:15:14.809339 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:15:17 crc kubenswrapper[4794]: E0216 18:15:17.798460 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:15:22 crc kubenswrapper[4794]: I0216 18:15:22.791966 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d"
Feb 16 18:15:22 crc kubenswrapper[4794]: E0216 18:15:22.792897 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 18:15:27 crc kubenswrapper[4794]: E0216 18:15:27.794185 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:15:29 crc kubenswrapper[4794]: E0216 18:15:29.795342 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:15:34 crc kubenswrapper[4794]: I0216 18:15:34.800361 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d"
Feb 16 18:15:34 crc kubenswrapper[4794]: E0216 18:15:34.801136 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 18:15:40 crc kubenswrapper[4794]: E0216 18:15:40.795953 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:15:41 crc kubenswrapper[4794]: E0216 18:15:41.794056 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:15:42 crc kubenswrapper[4794]: I0216 18:15:42.875797 4794 scope.go:117] "RemoveContainer" containerID="bc9365e8426a88c0b09ed8c3836f8a80d98196debeec5b07be146511e0454e50"
Feb 16 18:15:49 crc kubenswrapper[4794]: I0216 18:15:49.792172 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d"
Feb 16 18:15:49 crc kubenswrapper[4794]: E0216 18:15:49.793376 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 18:15:51 crc kubenswrapper[4794]: E0216 18:15:51.794843 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:15:52 crc kubenswrapper[4794]: E0216 18:15:52.793523 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:16:02 crc kubenswrapper[4794]: E0216 18:16:02.795355 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:16:03 crc kubenswrapper[4794]: I0216 18:16:03.791166 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d"
Feb 16 18:16:04 crc kubenswrapper[4794]: I0216 18:16:04.207406 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"6a86808fa24a97024716a4f82a177899dec831043626583733eb25cffc19e3bb"}
Feb 16 18:16:07 crc kubenswrapper[4794]: E0216 18:16:07.793944 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:16:16 crc kubenswrapper[4794]: E0216 18:16:16.795380 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:16:20 crc kubenswrapper[4794]: E0216 18:16:20.794754 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:16:29 crc kubenswrapper[4794]: E0216 18:16:29.795852 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:16:31 crc kubenswrapper[4794]: E0216 18:16:31.793872 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:16:43 crc kubenswrapper[4794]: E0216 18:16:43.793427 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:16:44 crc kubenswrapper[4794]: E0216 18:16:44.802859 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:16:55 crc kubenswrapper[4794]: E0216 18:16:55.794165 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:16:58 crc kubenswrapper[4794]: E0216 18:16:58.794465 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:17:06 crc kubenswrapper[4794]: E0216 18:17:06.794717 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:17:08 crc kubenswrapper[4794]: I0216 18:17:08.408531 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-snbfj"]
Feb 16 18:17:08 crc kubenswrapper[4794]: E0216 18:17:08.409270 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c71b34a0-a95e-4675-91f0-9f65bad2a8e2" containerName="collect-profiles"
Feb 16 18:17:08 crc kubenswrapper[4794]: I0216 18:17:08.409283 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="c71b34a0-a95e-4675-91f0-9f65bad2a8e2" containerName="collect-profiles"
Feb 16 18:17:08 crc kubenswrapper[4794]: I0216 18:17:08.409534 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="c71b34a0-a95e-4675-91f0-9f65bad2a8e2" containerName="collect-profiles"
Feb 16 18:17:08 crc kubenswrapper[4794]: I0216 18:17:08.411342 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-snbfj"
Feb 16 18:17:08 crc kubenswrapper[4794]: I0216 18:17:08.429027 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-snbfj"]
Feb 16 18:17:08 crc kubenswrapper[4794]: I0216 18:17:08.565782 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228679fc-e8a6-458b-b5e4-6cadf762ee0b-catalog-content\") pod \"redhat-marketplace-snbfj\" (UID: \"228679fc-e8a6-458b-b5e4-6cadf762ee0b\") " pod="openshift-marketplace/redhat-marketplace-snbfj"
Feb 16 18:17:08 crc kubenswrapper[4794]: I0216 18:17:08.566087 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbg65\" (UniqueName: \"kubernetes.io/projected/228679fc-e8a6-458b-b5e4-6cadf762ee0b-kube-api-access-xbg65\") pod \"redhat-marketplace-snbfj\" (UID: \"228679fc-e8a6-458b-b5e4-6cadf762ee0b\") " pod="openshift-marketplace/redhat-marketplace-snbfj"
Feb 16 18:17:08 crc kubenswrapper[4794]: I0216 18:17:08.566316 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228679fc-e8a6-458b-b5e4-6cadf762ee0b-utilities\") pod \"redhat-marketplace-snbfj\" (UID: \"228679fc-e8a6-458b-b5e4-6cadf762ee0b\") " pod="openshift-marketplace/redhat-marketplace-snbfj"
Feb 16 18:17:08 crc kubenswrapper[4794]: I0216 18:17:08.669794 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228679fc-e8a6-458b-b5e4-6cadf762ee0b-utilities\") pod \"redhat-marketplace-snbfj\" (UID: \"228679fc-e8a6-458b-b5e4-6cadf762ee0b\") " pod="openshift-marketplace/redhat-marketplace-snbfj"
Feb 16 18:17:08 crc kubenswrapper[4794]: I0216 18:17:08.669949 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228679fc-e8a6-458b-b5e4-6cadf762ee0b-catalog-content\") pod \"redhat-marketplace-snbfj\" (UID: \"228679fc-e8a6-458b-b5e4-6cadf762ee0b\") " pod="openshift-marketplace/redhat-marketplace-snbfj"
Feb 16 18:17:08 crc kubenswrapper[4794]: I0216 18:17:08.670295 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbg65\" (UniqueName: \"kubernetes.io/projected/228679fc-e8a6-458b-b5e4-6cadf762ee0b-kube-api-access-xbg65\") pod \"redhat-marketplace-snbfj\" (UID: \"228679fc-e8a6-458b-b5e4-6cadf762ee0b\") " pod="openshift-marketplace/redhat-marketplace-snbfj"
Feb 16 18:17:08 crc kubenswrapper[4794]: I0216 18:17:08.670463 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228679fc-e8a6-458b-b5e4-6cadf762ee0b-utilities\") pod \"redhat-marketplace-snbfj\" (UID: \"228679fc-e8a6-458b-b5e4-6cadf762ee0b\") " pod="openshift-marketplace/redhat-marketplace-snbfj"
Feb 16 18:17:08 crc kubenswrapper[4794]: I0216 18:17:08.670493 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228679fc-e8a6-458b-b5e4-6cadf762ee0b-catalog-content\") pod \"redhat-marketplace-snbfj\" (UID: \"228679fc-e8a6-458b-b5e4-6cadf762ee0b\") " pod="openshift-marketplace/redhat-marketplace-snbfj" Feb 16 18:17:08 crc kubenswrapper[4794]: I0216 18:17:08.693916 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbg65\" (UniqueName: \"kubernetes.io/projected/228679fc-e8a6-458b-b5e4-6cadf762ee0b-kube-api-access-xbg65\") pod \"redhat-marketplace-snbfj\" (UID: \"228679fc-e8a6-458b-b5e4-6cadf762ee0b\") " pod="openshift-marketplace/redhat-marketplace-snbfj" Feb 16 18:17:08 crc kubenswrapper[4794]: I0216 18:17:08.743974 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-snbfj" Feb 16 18:17:09 crc kubenswrapper[4794]: I0216 18:17:09.212077 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-snbfj"] Feb 16 18:17:09 crc kubenswrapper[4794]: W0216 18:17:09.219546 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod228679fc_e8a6_458b_b5e4_6cadf762ee0b.slice/crio-e59b3cd445979c0ab39737a6ad90320746d4ff763bff4bb1ddb5cb0d75ac644d WatchSource:0}: Error finding container e59b3cd445979c0ab39737a6ad90320746d4ff763bff4bb1ddb5cb0d75ac644d: Status 404 returned error can't find the container with id e59b3cd445979c0ab39737a6ad90320746d4ff763bff4bb1ddb5cb0d75ac644d Feb 16 18:17:10 crc kubenswrapper[4794]: I0216 18:17:10.015091 4794 generic.go:334] "Generic (PLEG): container finished" podID="228679fc-e8a6-458b-b5e4-6cadf762ee0b" containerID="2b05d77e10cfa5fd5a487e1497f1743e0c68bda41a0ce7bd1dd609f2be33876b" exitCode=0 Feb 16 18:17:10 crc kubenswrapper[4794]: I0216 18:17:10.015319 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snbfj" 
event={"ID":"228679fc-e8a6-458b-b5e4-6cadf762ee0b","Type":"ContainerDied","Data":"2b05d77e10cfa5fd5a487e1497f1743e0c68bda41a0ce7bd1dd609f2be33876b"} Feb 16 18:17:10 crc kubenswrapper[4794]: I0216 18:17:10.016638 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snbfj" event={"ID":"228679fc-e8a6-458b-b5e4-6cadf762ee0b","Type":"ContainerStarted","Data":"e59b3cd445979c0ab39737a6ad90320746d4ff763bff4bb1ddb5cb0d75ac644d"} Feb 16 18:17:11 crc kubenswrapper[4794]: I0216 18:17:11.030972 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snbfj" event={"ID":"228679fc-e8a6-458b-b5e4-6cadf762ee0b","Type":"ContainerStarted","Data":"47a60ffdaaeb609e4854401a9b6ae38dac062285dddf8d0edaf9f3953f9a5135"} Feb 16 18:17:12 crc kubenswrapper[4794]: I0216 18:17:12.047916 4794 generic.go:334] "Generic (PLEG): container finished" podID="228679fc-e8a6-458b-b5e4-6cadf762ee0b" containerID="47a60ffdaaeb609e4854401a9b6ae38dac062285dddf8d0edaf9f3953f9a5135" exitCode=0 Feb 16 18:17:12 crc kubenswrapper[4794]: I0216 18:17:12.048025 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snbfj" event={"ID":"228679fc-e8a6-458b-b5e4-6cadf762ee0b","Type":"ContainerDied","Data":"47a60ffdaaeb609e4854401a9b6ae38dac062285dddf8d0edaf9f3953f9a5135"} Feb 16 18:17:13 crc kubenswrapper[4794]: I0216 18:17:13.064626 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snbfj" event={"ID":"228679fc-e8a6-458b-b5e4-6cadf762ee0b","Type":"ContainerStarted","Data":"f64a2c1dfb51de267fb9eb500dc65a7609bd447a23748f834cb81fb5a29050b4"} Feb 16 18:17:13 crc kubenswrapper[4794]: I0216 18:17:13.092972 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-snbfj" podStartSLOduration=2.5659824159999998 podStartE2EDuration="5.092953132s" podCreationTimestamp="2026-02-16 18:17:08 
+0000 UTC" firstStartedPulling="2026-02-16 18:17:10.019908291 +0000 UTC m=+4655.968002938" lastFinishedPulling="2026-02-16 18:17:12.546878997 +0000 UTC m=+4658.494973654" observedRunningTime="2026-02-16 18:17:13.082799995 +0000 UTC m=+4659.030894642" watchObservedRunningTime="2026-02-16 18:17:13.092953132 +0000 UTC m=+4659.041047789" Feb 16 18:17:13 crc kubenswrapper[4794]: E0216 18:17:13.793153 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:17:18 crc kubenswrapper[4794]: I0216 18:17:18.744197 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-snbfj" Feb 16 18:17:18 crc kubenswrapper[4794]: I0216 18:17:18.745044 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-snbfj" Feb 16 18:17:18 crc kubenswrapper[4794]: I0216 18:17:18.813318 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-snbfj" Feb 16 18:17:19 crc kubenswrapper[4794]: I0216 18:17:19.628506 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-snbfj" Feb 16 18:17:19 crc kubenswrapper[4794]: I0216 18:17:19.676403 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-snbfj"] Feb 16 18:17:20 crc kubenswrapper[4794]: E0216 18:17:20.793934 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:17:21 crc kubenswrapper[4794]: I0216 18:17:21.160844 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-snbfj" podUID="228679fc-e8a6-458b-b5e4-6cadf762ee0b" containerName="registry-server" containerID="cri-o://f64a2c1dfb51de267fb9eb500dc65a7609bd447a23748f834cb81fb5a29050b4" gracePeriod=2 Feb 16 18:17:21 crc kubenswrapper[4794]: I0216 18:17:21.681396 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-snbfj" Feb 16 18:17:21 crc kubenswrapper[4794]: I0216 18:17:21.766046 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228679fc-e8a6-458b-b5e4-6cadf762ee0b-catalog-content\") pod \"228679fc-e8a6-458b-b5e4-6cadf762ee0b\" (UID: \"228679fc-e8a6-458b-b5e4-6cadf762ee0b\") " Feb 16 18:17:21 crc kubenswrapper[4794]: I0216 18:17:21.766118 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbg65\" (UniqueName: \"kubernetes.io/projected/228679fc-e8a6-458b-b5e4-6cadf762ee0b-kube-api-access-xbg65\") pod \"228679fc-e8a6-458b-b5e4-6cadf762ee0b\" (UID: \"228679fc-e8a6-458b-b5e4-6cadf762ee0b\") " Feb 16 18:17:21 crc kubenswrapper[4794]: I0216 18:17:21.766491 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228679fc-e8a6-458b-b5e4-6cadf762ee0b-utilities\") pod \"228679fc-e8a6-458b-b5e4-6cadf762ee0b\" (UID: \"228679fc-e8a6-458b-b5e4-6cadf762ee0b\") " Feb 16 18:17:21 crc kubenswrapper[4794]: I0216 18:17:21.768380 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/228679fc-e8a6-458b-b5e4-6cadf762ee0b-utilities" (OuterVolumeSpecName: "utilities") pod 
"228679fc-e8a6-458b-b5e4-6cadf762ee0b" (UID: "228679fc-e8a6-458b-b5e4-6cadf762ee0b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:17:21 crc kubenswrapper[4794]: I0216 18:17:21.774631 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/228679fc-e8a6-458b-b5e4-6cadf762ee0b-kube-api-access-xbg65" (OuterVolumeSpecName: "kube-api-access-xbg65") pod "228679fc-e8a6-458b-b5e4-6cadf762ee0b" (UID: "228679fc-e8a6-458b-b5e4-6cadf762ee0b"). InnerVolumeSpecName "kube-api-access-xbg65". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 18:17:21 crc kubenswrapper[4794]: I0216 18:17:21.808797 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/228679fc-e8a6-458b-b5e4-6cadf762ee0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "228679fc-e8a6-458b-b5e4-6cadf762ee0b" (UID: "228679fc-e8a6-458b-b5e4-6cadf762ee0b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:17:21 crc kubenswrapper[4794]: I0216 18:17:21.869582 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228679fc-e8a6-458b-b5e4-6cadf762ee0b-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 18:17:21 crc kubenswrapper[4794]: I0216 18:17:21.869615 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228679fc-e8a6-458b-b5e4-6cadf762ee0b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 18:17:21 crc kubenswrapper[4794]: I0216 18:17:21.869627 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbg65\" (UniqueName: \"kubernetes.io/projected/228679fc-e8a6-458b-b5e4-6cadf762ee0b-kube-api-access-xbg65\") on node \"crc\" DevicePath \"\"" Feb 16 18:17:22 crc kubenswrapper[4794]: I0216 18:17:22.172224 4794 generic.go:334] "Generic (PLEG): container finished" podID="228679fc-e8a6-458b-b5e4-6cadf762ee0b" containerID="f64a2c1dfb51de267fb9eb500dc65a7609bd447a23748f834cb81fb5a29050b4" exitCode=0 Feb 16 18:17:22 crc kubenswrapper[4794]: I0216 18:17:22.172267 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snbfj" event={"ID":"228679fc-e8a6-458b-b5e4-6cadf762ee0b","Type":"ContainerDied","Data":"f64a2c1dfb51de267fb9eb500dc65a7609bd447a23748f834cb81fb5a29050b4"} Feb 16 18:17:22 crc kubenswrapper[4794]: I0216 18:17:22.172315 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snbfj" event={"ID":"228679fc-e8a6-458b-b5e4-6cadf762ee0b","Type":"ContainerDied","Data":"e59b3cd445979c0ab39737a6ad90320746d4ff763bff4bb1ddb5cb0d75ac644d"} Feb 16 18:17:22 crc kubenswrapper[4794]: I0216 18:17:22.172337 4794 scope.go:117] "RemoveContainer" containerID="f64a2c1dfb51de267fb9eb500dc65a7609bd447a23748f834cb81fb5a29050b4" Feb 16 18:17:22 crc kubenswrapper[4794]: I0216 
18:17:22.172381 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-snbfj" Feb 16 18:17:22 crc kubenswrapper[4794]: I0216 18:17:22.215718 4794 scope.go:117] "RemoveContainer" containerID="47a60ffdaaeb609e4854401a9b6ae38dac062285dddf8d0edaf9f3953f9a5135" Feb 16 18:17:22 crc kubenswrapper[4794]: I0216 18:17:22.229492 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-snbfj"] Feb 16 18:17:22 crc kubenswrapper[4794]: I0216 18:17:22.245842 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-snbfj"] Feb 16 18:17:22 crc kubenswrapper[4794]: I0216 18:17:22.249431 4794 scope.go:117] "RemoveContainer" containerID="2b05d77e10cfa5fd5a487e1497f1743e0c68bda41a0ce7bd1dd609f2be33876b" Feb 16 18:17:22 crc kubenswrapper[4794]: I0216 18:17:22.306215 4794 scope.go:117] "RemoveContainer" containerID="f64a2c1dfb51de267fb9eb500dc65a7609bd447a23748f834cb81fb5a29050b4" Feb 16 18:17:22 crc kubenswrapper[4794]: E0216 18:17:22.306711 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f64a2c1dfb51de267fb9eb500dc65a7609bd447a23748f834cb81fb5a29050b4\": container with ID starting with f64a2c1dfb51de267fb9eb500dc65a7609bd447a23748f834cb81fb5a29050b4 not found: ID does not exist" containerID="f64a2c1dfb51de267fb9eb500dc65a7609bd447a23748f834cb81fb5a29050b4" Feb 16 18:17:22 crc kubenswrapper[4794]: I0216 18:17:22.306757 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f64a2c1dfb51de267fb9eb500dc65a7609bd447a23748f834cb81fb5a29050b4"} err="failed to get container status \"f64a2c1dfb51de267fb9eb500dc65a7609bd447a23748f834cb81fb5a29050b4\": rpc error: code = NotFound desc = could not find container \"f64a2c1dfb51de267fb9eb500dc65a7609bd447a23748f834cb81fb5a29050b4\": container with ID starting with 
f64a2c1dfb51de267fb9eb500dc65a7609bd447a23748f834cb81fb5a29050b4 not found: ID does not exist" Feb 16 18:17:22 crc kubenswrapper[4794]: I0216 18:17:22.306784 4794 scope.go:117] "RemoveContainer" containerID="47a60ffdaaeb609e4854401a9b6ae38dac062285dddf8d0edaf9f3953f9a5135" Feb 16 18:17:22 crc kubenswrapper[4794]: E0216 18:17:22.307100 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47a60ffdaaeb609e4854401a9b6ae38dac062285dddf8d0edaf9f3953f9a5135\": container with ID starting with 47a60ffdaaeb609e4854401a9b6ae38dac062285dddf8d0edaf9f3953f9a5135 not found: ID does not exist" containerID="47a60ffdaaeb609e4854401a9b6ae38dac062285dddf8d0edaf9f3953f9a5135" Feb 16 18:17:22 crc kubenswrapper[4794]: I0216 18:17:22.307127 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47a60ffdaaeb609e4854401a9b6ae38dac062285dddf8d0edaf9f3953f9a5135"} err="failed to get container status \"47a60ffdaaeb609e4854401a9b6ae38dac062285dddf8d0edaf9f3953f9a5135\": rpc error: code = NotFound desc = could not find container \"47a60ffdaaeb609e4854401a9b6ae38dac062285dddf8d0edaf9f3953f9a5135\": container with ID starting with 47a60ffdaaeb609e4854401a9b6ae38dac062285dddf8d0edaf9f3953f9a5135 not found: ID does not exist" Feb 16 18:17:22 crc kubenswrapper[4794]: I0216 18:17:22.307148 4794 scope.go:117] "RemoveContainer" containerID="2b05d77e10cfa5fd5a487e1497f1743e0c68bda41a0ce7bd1dd609f2be33876b" Feb 16 18:17:22 crc kubenswrapper[4794]: E0216 18:17:22.307705 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b05d77e10cfa5fd5a487e1497f1743e0c68bda41a0ce7bd1dd609f2be33876b\": container with ID starting with 2b05d77e10cfa5fd5a487e1497f1743e0c68bda41a0ce7bd1dd609f2be33876b not found: ID does not exist" containerID="2b05d77e10cfa5fd5a487e1497f1743e0c68bda41a0ce7bd1dd609f2be33876b" Feb 16 18:17:22 crc 
kubenswrapper[4794]: I0216 18:17:22.307733 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b05d77e10cfa5fd5a487e1497f1743e0c68bda41a0ce7bd1dd609f2be33876b"} err="failed to get container status \"2b05d77e10cfa5fd5a487e1497f1743e0c68bda41a0ce7bd1dd609f2be33876b\": rpc error: code = NotFound desc = could not find container \"2b05d77e10cfa5fd5a487e1497f1743e0c68bda41a0ce7bd1dd609f2be33876b\": container with ID starting with 2b05d77e10cfa5fd5a487e1497f1743e0c68bda41a0ce7bd1dd609f2be33876b not found: ID does not exist" Feb 16 18:17:22 crc kubenswrapper[4794]: I0216 18:17:22.807199 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="228679fc-e8a6-458b-b5e4-6cadf762ee0b" path="/var/lib/kubelet/pods/228679fc-e8a6-458b-b5e4-6cadf762ee0b/volumes" Feb 16 18:17:25 crc kubenswrapper[4794]: E0216 18:17:25.795275 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:17:34 crc kubenswrapper[4794]: E0216 18:17:34.815202 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:17:39 crc kubenswrapper[4794]: I0216 18:17:39.795113 4794 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 18:17:39 crc kubenswrapper[4794]: E0216 18:17:39.949027 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 18:17:39 crc kubenswrapper[4794]: E0216 18:17:39.949084 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 18:17:39 crc kubenswrapper[4794]: E0216 18:17:39.949187 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2h5l2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7gcsf_openstack(c695f880-15cb-45b1-8545-60d8437ec631): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 18:17:39 crc kubenswrapper[4794]: E0216 18:17:39.950557 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:17:45 crc kubenswrapper[4794]: E0216 18:17:45.793385 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:17:50 crc kubenswrapper[4794]: E0216 18:17:50.795044 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:17:58 crc kubenswrapper[4794]: E0216 18:17:58.919724 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest 
current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 18:17:58 crc kubenswrapper[4794]: E0216 18:17:58.920148 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 18:17:58 crc kubenswrapper[4794]: E0216 18:17:58.920265 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh58dh6ch557h84h55ch564h5bh58fh5c8h5d4h584h669h667h569h59hd5hdbh9dh67ch5f9h59fh597h96h664h687h66dhfch5ddh5b7h88h59cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9v9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8981f528-1f74-4d56-a93c-22860725b490): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 18:17:58 crc kubenswrapper[4794]: E0216 18:17:58.921522 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:18:04 crc kubenswrapper[4794]: E0216 18:18:04.809124 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:18:10 crc kubenswrapper[4794]: E0216 18:18:10.794958 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:18:19 crc kubenswrapper[4794]: E0216 18:18:19.794254 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:18:20 crc kubenswrapper[4794]: I0216 18:18:20.141244 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 18:18:20 crc kubenswrapper[4794]: I0216 18:18:20.141414 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 18:18:21 crc kubenswrapper[4794]: E0216 18:18:21.794614 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:18:32 crc kubenswrapper[4794]: I0216 18:18:32.016875 4794 patch_prober.go:28] interesting pod/logging-loki-gateway-5db5847d75-whsqk container/gateway namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={ Feb 16 18:18:32 crc kubenswrapper[4794]: "http": "returned status 429, expected 200" Feb 16 18:18:32 crc kubenswrapper[4794]: } Feb 16 18:18:32 crc kubenswrapper[4794]: I0216 18:18:32.017330 4794 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5db5847d75-whsqk" podUID="9d2f1ecd-980b-430c-8ed1-e83406722170" containerName="gateway" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 18:18:33 crc kubenswrapper[4794]: E0216 18:18:33.795070 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:18:36 crc kubenswrapper[4794]: E0216 18:18:36.794920 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:18:43 crc kubenswrapper[4794]: I0216 18:18:43.015201 4794 scope.go:117] "RemoveContainer" containerID="8a9172d9813a5755b9026670a0af7f987c29a20dd7cbf749863f02ac5fdac820" Feb 16 18:18:43 crc kubenswrapper[4794]: I0216 18:18:43.061892 4794 scope.go:117] "RemoveContainer" containerID="4d5048f6673f0e33a3102bbf371f24e891a38653ab827fae90cd22b87219d989" Feb 16 18:18:43 crc kubenswrapper[4794]: I0216 18:18:43.114329 4794 scope.go:117] "RemoveContainer" containerID="820e4e3d4a58c25516e7cdc80093c8c3a8c14c8347e83579a34a599d7fe93b1c" Feb 16 18:18:47 crc kubenswrapper[4794]: E0216 18:18:47.793776 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:18:50 crc kubenswrapper[4794]: I0216 18:18:50.140749 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 18:18:50 crc kubenswrapper[4794]: I0216 18:18:50.141238 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 18:18:50 crc kubenswrapper[4794]: E0216 18:18:50.795335 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:18:59 crc kubenswrapper[4794]: E0216 18:18:59.800545 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:19:05 crc kubenswrapper[4794]: E0216 18:19:05.794402 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:19:10 crc kubenswrapper[4794]: E0216 18:19:10.799064 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:19:19 crc kubenswrapper[4794]: E0216 18:19:19.793903 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:19:20 crc kubenswrapper[4794]: I0216 18:19:20.140248 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 18:19:20 crc kubenswrapper[4794]: I0216 18:19:20.140335 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 18:19:20 crc kubenswrapper[4794]: I0216 18:19:20.140379 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 18:19:20 crc kubenswrapper[4794]: I0216 18:19:20.141328 4794 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a86808fa24a97024716a4f82a177899dec831043626583733eb25cffc19e3bb"} pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 18:19:20 crc kubenswrapper[4794]: I0216 18:19:20.141406 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" containerID="cri-o://6a86808fa24a97024716a4f82a177899dec831043626583733eb25cffc19e3bb" gracePeriod=600 Feb 16 18:19:20 crc kubenswrapper[4794]: I0216 18:19:20.685312 4794 generic.go:334] "Generic (PLEG): container finished" podID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerID="6a86808fa24a97024716a4f82a177899dec831043626583733eb25cffc19e3bb" exitCode=0 Feb 16 18:19:20 crc kubenswrapper[4794]: I0216 18:19:20.685359 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerDied","Data":"6a86808fa24a97024716a4f82a177899dec831043626583733eb25cffc19e3bb"} Feb 16 18:19:20 crc kubenswrapper[4794]: I0216 18:19:20.685702 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81"} Feb 16 18:19:20 crc kubenswrapper[4794]: I0216 18:19:20.685728 4794 scope.go:117] "RemoveContainer" containerID="cbb8935ebd2ac776900cf3a87a589d519f92877b04f772fa4c7f2bd3aec10a3d" Feb 16 18:19:21 crc kubenswrapper[4794]: E0216 18:19:21.794015 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.039872 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk"] Feb 16 18:19:32 crc kubenswrapper[4794]: E0216 18:19:32.043203 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="228679fc-e8a6-458b-b5e4-6cadf762ee0b" containerName="extract-content" Feb 16 18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.043315 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="228679fc-e8a6-458b-b5e4-6cadf762ee0b" containerName="extract-content" Feb 16 18:19:32 crc kubenswrapper[4794]: E0216 18:19:32.043393 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="228679fc-e8a6-458b-b5e4-6cadf762ee0b" containerName="extract-utilities" Feb 16 18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.043449 4794 
state_mem.go:107] "Deleted CPUSet assignment" podUID="228679fc-e8a6-458b-b5e4-6cadf762ee0b" containerName="extract-utilities" Feb 16 18:19:32 crc kubenswrapper[4794]: E0216 18:19:32.043526 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="228679fc-e8a6-458b-b5e4-6cadf762ee0b" containerName="registry-server" Feb 16 18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.043582 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="228679fc-e8a6-458b-b5e4-6cadf762ee0b" containerName="registry-server" Feb 16 18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.048844 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="228679fc-e8a6-458b-b5e4-6cadf762ee0b" containerName="registry-server" Feb 16 18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.049959 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk" Feb 16 18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.053436 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.053480 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.054047 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.054324 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kshzw" Feb 16 18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.054342 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk"] Feb 16 18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.107522 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-vh5rm\" (UniqueName: \"kubernetes.io/projected/60fab4d9-75ee-41a4-8a19-11f232514267-kube-api-access-vh5rm\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk\" (UID: \"60fab4d9-75ee-41a4-8a19-11f232514267\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk" Feb 16 18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.108086 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60fab4d9-75ee-41a4-8a19-11f232514267-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk\" (UID: \"60fab4d9-75ee-41a4-8a19-11f232514267\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk" Feb 16 18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.108150 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60fab4d9-75ee-41a4-8a19-11f232514267-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk\" (UID: \"60fab4d9-75ee-41a4-8a19-11f232514267\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk" Feb 16 18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.211220 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh5rm\" (UniqueName: \"kubernetes.io/projected/60fab4d9-75ee-41a4-8a19-11f232514267-kube-api-access-vh5rm\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk\" (UID: \"60fab4d9-75ee-41a4-8a19-11f232514267\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk" Feb 16 18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.212139 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60fab4d9-75ee-41a4-8a19-11f232514267-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk\" (UID: \"60fab4d9-75ee-41a4-8a19-11f232514267\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk" Feb 16 18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.212185 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60fab4d9-75ee-41a4-8a19-11f232514267-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk\" (UID: \"60fab4d9-75ee-41a4-8a19-11f232514267\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk" Feb 16 18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.219076 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60fab4d9-75ee-41a4-8a19-11f232514267-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk\" (UID: \"60fab4d9-75ee-41a4-8a19-11f232514267\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk" Feb 16 18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.219452 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60fab4d9-75ee-41a4-8a19-11f232514267-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk\" (UID: \"60fab4d9-75ee-41a4-8a19-11f232514267\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk" Feb 16 18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.232223 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh5rm\" (UniqueName: \"kubernetes.io/projected/60fab4d9-75ee-41a4-8a19-11f232514267-kube-api-access-vh5rm\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk\" (UID: \"60fab4d9-75ee-41a4-8a19-11f232514267\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk" Feb 16 
18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.379659 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk" Feb 16 18:19:32 crc kubenswrapper[4794]: E0216 18:19:32.793483 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:19:32 crc kubenswrapper[4794]: E0216 18:19:32.793541 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:19:32 crc kubenswrapper[4794]: I0216 18:19:32.966872 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk"] Feb 16 18:19:33 crc kubenswrapper[4794]: I0216 18:19:33.869525 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk" event={"ID":"60fab4d9-75ee-41a4-8a19-11f232514267","Type":"ContainerStarted","Data":"c7fc766b4254346928003c7d677315be911d23e0a6918e860f4c4aee214c78f4"} Feb 16 18:19:33 crc kubenswrapper[4794]: I0216 18:19:33.869853 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk" event={"ID":"60fab4d9-75ee-41a4-8a19-11f232514267","Type":"ContainerStarted","Data":"2356b34b57c74e73826ef81542ea0b870ff7f4b380e6fc3224156aae8a8776f9"} Feb 16 18:19:33 crc kubenswrapper[4794]: I0216 18:19:33.885355 4794 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk" podStartSLOduration=1.426798038 podStartE2EDuration="1.885338438s" podCreationTimestamp="2026-02-16 18:19:32 +0000 UTC" firstStartedPulling="2026-02-16 18:19:32.972741885 +0000 UTC m=+4798.920836532" lastFinishedPulling="2026-02-16 18:19:33.431282285 +0000 UTC m=+4799.379376932" observedRunningTime="2026-02-16 18:19:33.883168907 +0000 UTC m=+4799.831263554" watchObservedRunningTime="2026-02-16 18:19:33.885338438 +0000 UTC m=+4799.833433085" Feb 16 18:19:43 crc kubenswrapper[4794]: E0216 18:19:43.793414 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:19:47 crc kubenswrapper[4794]: E0216 18:19:47.792289 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:19:58 crc kubenswrapper[4794]: E0216 18:19:58.793658 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:19:59 crc kubenswrapper[4794]: E0216 18:19:59.794296 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:20:10 crc kubenswrapper[4794]: E0216 18:20:10.795820 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:20:11 crc kubenswrapper[4794]: E0216 18:20:11.794713 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:20:23 crc kubenswrapper[4794]: E0216 18:20:23.795671 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:20:25 crc kubenswrapper[4794]: E0216 18:20:25.797323 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:20:34 crc kubenswrapper[4794]: E0216 18:20:34.804967 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:20:40 crc kubenswrapper[4794]: E0216 18:20:40.798407 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:20:48 crc kubenswrapper[4794]: E0216 18:20:48.795696 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:20:53 crc kubenswrapper[4794]: E0216 18:20:53.793869 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:21:00 crc kubenswrapper[4794]: E0216 18:21:00.798706 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:21:07 crc kubenswrapper[4794]: E0216 18:21:07.795604 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:21:11 crc kubenswrapper[4794]: E0216 18:21:11.794670 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:21:20 crc kubenswrapper[4794]: I0216 18:21:20.140524 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 18:21:20 crc kubenswrapper[4794]: I0216 18:21:20.141087 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 18:21:20 crc kubenswrapper[4794]: E0216 18:21:20.795411 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:21:25 crc kubenswrapper[4794]: E0216 18:21:25.796420 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:21:34 crc kubenswrapper[4794]: E0216 18:21:34.800989 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:21:39 crc kubenswrapper[4794]: E0216 18:21:39.794391 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:21:49 crc kubenswrapper[4794]: E0216 18:21:49.794575 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:21:50 crc kubenswrapper[4794]: I0216 18:21:50.141032 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 18:21:50 crc kubenswrapper[4794]: I0216 18:21:50.141103 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 18:21:50 crc kubenswrapper[4794]: E0216 18:21:50.795396 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:21:51 crc kubenswrapper[4794]: I0216 18:21:51.854643 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cs97m"] Feb 16 18:21:51 crc kubenswrapper[4794]: I0216 18:21:51.859042 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cs97m" Feb 16 18:21:51 crc kubenswrapper[4794]: I0216 18:21:51.868007 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cs97m"] Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.000003 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcc854a9-e724-466a-b5df-1a7736cb6d8b-catalog-content\") pod \"community-operators-cs97m\" (UID: \"dcc854a9-e724-466a-b5df-1a7736cb6d8b\") " pod="openshift-marketplace/community-operators-cs97m" Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.000554 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcc854a9-e724-466a-b5df-1a7736cb6d8b-utilities\") pod \"community-operators-cs97m\" (UID: \"dcc854a9-e724-466a-b5df-1a7736cb6d8b\") " pod="openshift-marketplace/community-operators-cs97m" Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.000699 4794 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jtzb\" (UniqueName: \"kubernetes.io/projected/dcc854a9-e724-466a-b5df-1a7736cb6d8b-kube-api-access-4jtzb\") pod \"community-operators-cs97m\" (UID: \"dcc854a9-e724-466a-b5df-1a7736cb6d8b\") " pod="openshift-marketplace/community-operators-cs97m" Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.072228 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wwnlq"] Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.075187 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wwnlq" Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.091871 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wwnlq"] Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.102914 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcc854a9-e724-466a-b5df-1a7736cb6d8b-catalog-content\") pod \"community-operators-cs97m\" (UID: \"dcc854a9-e724-466a-b5df-1a7736cb6d8b\") " pod="openshift-marketplace/community-operators-cs97m" Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.103385 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcc854a9-e724-466a-b5df-1a7736cb6d8b-utilities\") pod \"community-operators-cs97m\" (UID: \"dcc854a9-e724-466a-b5df-1a7736cb6d8b\") " pod="openshift-marketplace/community-operators-cs97m" Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.103564 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jtzb\" (UniqueName: \"kubernetes.io/projected/dcc854a9-e724-466a-b5df-1a7736cb6d8b-kube-api-access-4jtzb\") pod \"community-operators-cs97m\" (UID: 
\"dcc854a9-e724-466a-b5df-1a7736cb6d8b\") " pod="openshift-marketplace/community-operators-cs97m" Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.103599 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcc854a9-e724-466a-b5df-1a7736cb6d8b-catalog-content\") pod \"community-operators-cs97m\" (UID: \"dcc854a9-e724-466a-b5df-1a7736cb6d8b\") " pod="openshift-marketplace/community-operators-cs97m" Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.103713 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcc854a9-e724-466a-b5df-1a7736cb6d8b-utilities\") pod \"community-operators-cs97m\" (UID: \"dcc854a9-e724-466a-b5df-1a7736cb6d8b\") " pod="openshift-marketplace/community-operators-cs97m" Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.133097 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jtzb\" (UniqueName: \"kubernetes.io/projected/dcc854a9-e724-466a-b5df-1a7736cb6d8b-kube-api-access-4jtzb\") pod \"community-operators-cs97m\" (UID: \"dcc854a9-e724-466a-b5df-1a7736cb6d8b\") " pod="openshift-marketplace/community-operators-cs97m" Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.181186 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cs97m" Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.205974 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9717ba3f-a4bd-4fd5-8998-aff060455692-catalog-content\") pod \"redhat-operators-wwnlq\" (UID: \"9717ba3f-a4bd-4fd5-8998-aff060455692\") " pod="openshift-marketplace/redhat-operators-wwnlq" Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.206173 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k68wn\" (UniqueName: \"kubernetes.io/projected/9717ba3f-a4bd-4fd5-8998-aff060455692-kube-api-access-k68wn\") pod \"redhat-operators-wwnlq\" (UID: \"9717ba3f-a4bd-4fd5-8998-aff060455692\") " pod="openshift-marketplace/redhat-operators-wwnlq" Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.206233 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9717ba3f-a4bd-4fd5-8998-aff060455692-utilities\") pod \"redhat-operators-wwnlq\" (UID: \"9717ba3f-a4bd-4fd5-8998-aff060455692\") " pod="openshift-marketplace/redhat-operators-wwnlq" Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.309676 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k68wn\" (UniqueName: \"kubernetes.io/projected/9717ba3f-a4bd-4fd5-8998-aff060455692-kube-api-access-k68wn\") pod \"redhat-operators-wwnlq\" (UID: \"9717ba3f-a4bd-4fd5-8998-aff060455692\") " pod="openshift-marketplace/redhat-operators-wwnlq" Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.309781 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9717ba3f-a4bd-4fd5-8998-aff060455692-utilities\") pod \"redhat-operators-wwnlq\" (UID: 
\"9717ba3f-a4bd-4fd5-8998-aff060455692\") " pod="openshift-marketplace/redhat-operators-wwnlq" Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.309940 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9717ba3f-a4bd-4fd5-8998-aff060455692-catalog-content\") pod \"redhat-operators-wwnlq\" (UID: \"9717ba3f-a4bd-4fd5-8998-aff060455692\") " pod="openshift-marketplace/redhat-operators-wwnlq" Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.310397 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9717ba3f-a4bd-4fd5-8998-aff060455692-utilities\") pod \"redhat-operators-wwnlq\" (UID: \"9717ba3f-a4bd-4fd5-8998-aff060455692\") " pod="openshift-marketplace/redhat-operators-wwnlq" Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.310510 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9717ba3f-a4bd-4fd5-8998-aff060455692-catalog-content\") pod \"redhat-operators-wwnlq\" (UID: \"9717ba3f-a4bd-4fd5-8998-aff060455692\") " pod="openshift-marketplace/redhat-operators-wwnlq" Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.334248 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k68wn\" (UniqueName: \"kubernetes.io/projected/9717ba3f-a4bd-4fd5-8998-aff060455692-kube-api-access-k68wn\") pod \"redhat-operators-wwnlq\" (UID: \"9717ba3f-a4bd-4fd5-8998-aff060455692\") " pod="openshift-marketplace/redhat-operators-wwnlq" Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.396145 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wwnlq" Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.782004 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cs97m"] Feb 16 18:21:52 crc kubenswrapper[4794]: I0216 18:21:52.966050 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wwnlq"] Feb 16 18:21:52 crc kubenswrapper[4794]: W0216 18:21:52.968676 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9717ba3f_a4bd_4fd5_8998_aff060455692.slice/crio-13dd75a428971b4ad7ab8ab7dd0cbfb2fb6e66f4075bff15effcae2f4d70d354 WatchSource:0}: Error finding container 13dd75a428971b4ad7ab8ab7dd0cbfb2fb6e66f4075bff15effcae2f4d70d354: Status 404 returned error can't find the container with id 13dd75a428971b4ad7ab8ab7dd0cbfb2fb6e66f4075bff15effcae2f4d70d354 Feb 16 18:21:53 crc kubenswrapper[4794]: I0216 18:21:53.594846 4794 generic.go:334] "Generic (PLEG): container finished" podID="9717ba3f-a4bd-4fd5-8998-aff060455692" containerID="1ccef27dc8b6fe641b14f22058fe22af3320e40761bb1720ff1db4fa3c1e855f" exitCode=0 Feb 16 18:21:53 crc kubenswrapper[4794]: I0216 18:21:53.594898 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwnlq" event={"ID":"9717ba3f-a4bd-4fd5-8998-aff060455692","Type":"ContainerDied","Data":"1ccef27dc8b6fe641b14f22058fe22af3320e40761bb1720ff1db4fa3c1e855f"} Feb 16 18:21:53 crc kubenswrapper[4794]: I0216 18:21:53.595133 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwnlq" event={"ID":"9717ba3f-a4bd-4fd5-8998-aff060455692","Type":"ContainerStarted","Data":"13dd75a428971b4ad7ab8ab7dd0cbfb2fb6e66f4075bff15effcae2f4d70d354"} Feb 16 18:21:53 crc kubenswrapper[4794]: I0216 18:21:53.596696 4794 generic.go:334] "Generic (PLEG): container finished" 
podID="dcc854a9-e724-466a-b5df-1a7736cb6d8b" containerID="4a6b3cfa2412d46c89300413c978b00630af42a6ec85d73fd57dfcf29408139e" exitCode=0 Feb 16 18:21:53 crc kubenswrapper[4794]: I0216 18:21:53.597566 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cs97m" event={"ID":"dcc854a9-e724-466a-b5df-1a7736cb6d8b","Type":"ContainerDied","Data":"4a6b3cfa2412d46c89300413c978b00630af42a6ec85d73fd57dfcf29408139e"} Feb 16 18:21:53 crc kubenswrapper[4794]: I0216 18:21:53.597703 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cs97m" event={"ID":"dcc854a9-e724-466a-b5df-1a7736cb6d8b","Type":"ContainerStarted","Data":"2e98fcc51d35fe6def31607be404549542ebac6ae58b13408eee8aeeb19af9a0"} Feb 16 18:21:54 crc kubenswrapper[4794]: I0216 18:21:54.612288 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwnlq" event={"ID":"9717ba3f-a4bd-4fd5-8998-aff060455692","Type":"ContainerStarted","Data":"827510b5ad1a31d4a22c69ba61a2d919b5bb568578105738d111676418d4e567"} Feb 16 18:21:55 crc kubenswrapper[4794]: I0216 18:21:55.624040 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cs97m" event={"ID":"dcc854a9-e724-466a-b5df-1a7736cb6d8b","Type":"ContainerStarted","Data":"1c4803d6f1282b88fad2510c61cdec3c9f6c932bf975d110c3a45f9301ccb20e"} Feb 16 18:21:57 crc kubenswrapper[4794]: I0216 18:21:57.645538 4794 generic.go:334] "Generic (PLEG): container finished" podID="9717ba3f-a4bd-4fd5-8998-aff060455692" containerID="827510b5ad1a31d4a22c69ba61a2d919b5bb568578105738d111676418d4e567" exitCode=0 Feb 16 18:21:57 crc kubenswrapper[4794]: I0216 18:21:57.645602 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwnlq" event={"ID":"9717ba3f-a4bd-4fd5-8998-aff060455692","Type":"ContainerDied","Data":"827510b5ad1a31d4a22c69ba61a2d919b5bb568578105738d111676418d4e567"} 
Feb 16 18:21:58 crc kubenswrapper[4794]: I0216 18:21:58.666421 4794 generic.go:334] "Generic (PLEG): container finished" podID="dcc854a9-e724-466a-b5df-1a7736cb6d8b" containerID="1c4803d6f1282b88fad2510c61cdec3c9f6c932bf975d110c3a45f9301ccb20e" exitCode=0 Feb 16 18:21:58 crc kubenswrapper[4794]: I0216 18:21:58.666510 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cs97m" event={"ID":"dcc854a9-e724-466a-b5df-1a7736cb6d8b","Type":"ContainerDied","Data":"1c4803d6f1282b88fad2510c61cdec3c9f6c932bf975d110c3a45f9301ccb20e"} Feb 16 18:21:59 crc kubenswrapper[4794]: I0216 18:21:59.676854 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cs97m" event={"ID":"dcc854a9-e724-466a-b5df-1a7736cb6d8b","Type":"ContainerStarted","Data":"5b2bcd4f10ca28c1177837ee93e38f2e2d3750f2676e890e7dd7ecd8c621612d"} Feb 16 18:21:59 crc kubenswrapper[4794]: I0216 18:21:59.679741 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwnlq" event={"ID":"9717ba3f-a4bd-4fd5-8998-aff060455692","Type":"ContainerStarted","Data":"58a38057267906efba7152998b19c07c09527c0547119104a0ea8576c3347ec0"} Feb 16 18:21:59 crc kubenswrapper[4794]: I0216 18:21:59.697355 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cs97m" podStartSLOduration=3.191083344 podStartE2EDuration="8.697337068s" podCreationTimestamp="2026-02-16 18:21:51 +0000 UTC" firstStartedPulling="2026-02-16 18:21:53.598606735 +0000 UTC m=+4939.546701382" lastFinishedPulling="2026-02-16 18:21:59.104860459 +0000 UTC m=+4945.052955106" observedRunningTime="2026-02-16 18:21:59.693668644 +0000 UTC m=+4945.641763291" watchObservedRunningTime="2026-02-16 18:21:59.697337068 +0000 UTC m=+4945.645431715" Feb 16 18:21:59 crc kubenswrapper[4794]: I0216 18:21:59.718796 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-wwnlq" podStartSLOduration=3.03700989 podStartE2EDuration="7.718774834s" podCreationTimestamp="2026-02-16 18:21:52 +0000 UTC" firstStartedPulling="2026-02-16 18:21:53.597209716 +0000 UTC m=+4939.545304363" lastFinishedPulling="2026-02-16 18:21:58.27897464 +0000 UTC m=+4944.227069307" observedRunningTime="2026-02-16 18:21:59.711952611 +0000 UTC m=+4945.660047258" watchObservedRunningTime="2026-02-16 18:21:59.718774834 +0000 UTC m=+4945.666869481" Feb 16 18:22:02 crc kubenswrapper[4794]: I0216 18:22:02.182067 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cs97m" Feb 16 18:22:02 crc kubenswrapper[4794]: I0216 18:22:02.182614 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cs97m" Feb 16 18:22:02 crc kubenswrapper[4794]: I0216 18:22:02.397015 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wwnlq" Feb 16 18:22:02 crc kubenswrapper[4794]: I0216 18:22:02.397203 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wwnlq" Feb 16 18:22:02 crc kubenswrapper[4794]: E0216 18:22:02.793325 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:22:03 crc kubenswrapper[4794]: I0216 18:22:03.252975 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-cs97m" podUID="dcc854a9-e724-466a-b5df-1a7736cb6d8b" containerName="registry-server" probeResult="failure" output=< Feb 16 18:22:03 crc kubenswrapper[4794]: timeout: failed to connect service 
":50051" within 1s Feb 16 18:22:03 crc kubenswrapper[4794]: > Feb 16 18:22:03 crc kubenswrapper[4794]: I0216 18:22:03.449794 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wwnlq" podUID="9717ba3f-a4bd-4fd5-8998-aff060455692" containerName="registry-server" probeResult="failure" output=< Feb 16 18:22:03 crc kubenswrapper[4794]: timeout: failed to connect service ":50051" within 1s Feb 16 18:22:03 crc kubenswrapper[4794]: > Feb 16 18:22:04 crc kubenswrapper[4794]: E0216 18:22:04.801682 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:22:12 crc kubenswrapper[4794]: I0216 18:22:12.271720 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cs97m" Feb 16 18:22:12 crc kubenswrapper[4794]: I0216 18:22:12.355790 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cs97m" Feb 16 18:22:12 crc kubenswrapper[4794]: I0216 18:22:12.458881 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wwnlq" Feb 16 18:22:12 crc kubenswrapper[4794]: I0216 18:22:12.538601 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cs97m"] Feb 16 18:22:12 crc kubenswrapper[4794]: I0216 18:22:12.542024 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wwnlq" Feb 16 18:22:13 crc kubenswrapper[4794]: I0216 18:22:13.833228 4794 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-cs97m" podUID="dcc854a9-e724-466a-b5df-1a7736cb6d8b" containerName="registry-server" containerID="cri-o://5b2bcd4f10ca28c1177837ee93e38f2e2d3750f2676e890e7dd7ecd8c621612d" gracePeriod=2 Feb 16 18:22:14 crc kubenswrapper[4794]: I0216 18:22:14.355360 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cs97m" Feb 16 18:22:14 crc kubenswrapper[4794]: I0216 18:22:14.485437 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcc854a9-e724-466a-b5df-1a7736cb6d8b-catalog-content\") pod \"dcc854a9-e724-466a-b5df-1a7736cb6d8b\" (UID: \"dcc854a9-e724-466a-b5df-1a7736cb6d8b\") " Feb 16 18:22:14 crc kubenswrapper[4794]: I0216 18:22:14.485495 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jtzb\" (UniqueName: \"kubernetes.io/projected/dcc854a9-e724-466a-b5df-1a7736cb6d8b-kube-api-access-4jtzb\") pod \"dcc854a9-e724-466a-b5df-1a7736cb6d8b\" (UID: \"dcc854a9-e724-466a-b5df-1a7736cb6d8b\") " Feb 16 18:22:14 crc kubenswrapper[4794]: I0216 18:22:14.485577 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcc854a9-e724-466a-b5df-1a7736cb6d8b-utilities\") pod \"dcc854a9-e724-466a-b5df-1a7736cb6d8b\" (UID: \"dcc854a9-e724-466a-b5df-1a7736cb6d8b\") " Feb 16 18:22:14 crc kubenswrapper[4794]: I0216 18:22:14.487117 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcc854a9-e724-466a-b5df-1a7736cb6d8b-utilities" (OuterVolumeSpecName: "utilities") pod "dcc854a9-e724-466a-b5df-1a7736cb6d8b" (UID: "dcc854a9-e724-466a-b5df-1a7736cb6d8b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:22:14 crc kubenswrapper[4794]: I0216 18:22:14.493860 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcc854a9-e724-466a-b5df-1a7736cb6d8b-kube-api-access-4jtzb" (OuterVolumeSpecName: "kube-api-access-4jtzb") pod "dcc854a9-e724-466a-b5df-1a7736cb6d8b" (UID: "dcc854a9-e724-466a-b5df-1a7736cb6d8b"). InnerVolumeSpecName "kube-api-access-4jtzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 18:22:14 crc kubenswrapper[4794]: I0216 18:22:14.567749 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcc854a9-e724-466a-b5df-1a7736cb6d8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dcc854a9-e724-466a-b5df-1a7736cb6d8b" (UID: "dcc854a9-e724-466a-b5df-1a7736cb6d8b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:22:14 crc kubenswrapper[4794]: I0216 18:22:14.588974 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jtzb\" (UniqueName: \"kubernetes.io/projected/dcc854a9-e724-466a-b5df-1a7736cb6d8b-kube-api-access-4jtzb\") on node \"crc\" DevicePath \"\"" Feb 16 18:22:14 crc kubenswrapper[4794]: I0216 18:22:14.589008 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcc854a9-e724-466a-b5df-1a7736cb6d8b-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 18:22:14 crc kubenswrapper[4794]: I0216 18:22:14.589020 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcc854a9-e724-466a-b5df-1a7736cb6d8b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 18:22:14 crc kubenswrapper[4794]: I0216 18:22:14.725455 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wwnlq"] Feb 16 18:22:14 crc kubenswrapper[4794]: I0216 
18:22:14.725866 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wwnlq" podUID="9717ba3f-a4bd-4fd5-8998-aff060455692" containerName="registry-server" containerID="cri-o://58a38057267906efba7152998b19c07c09527c0547119104a0ea8576c3347ec0" gracePeriod=2 Feb 16 18:22:14 crc kubenswrapper[4794]: E0216 18:22:14.809849 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:22:14 crc kubenswrapper[4794]: I0216 18:22:14.849805 4794 generic.go:334] "Generic (PLEG): container finished" podID="dcc854a9-e724-466a-b5df-1a7736cb6d8b" containerID="5b2bcd4f10ca28c1177837ee93e38f2e2d3750f2676e890e7dd7ecd8c621612d" exitCode=0 Feb 16 18:22:14 crc kubenswrapper[4794]: I0216 18:22:14.849849 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cs97m" event={"ID":"dcc854a9-e724-466a-b5df-1a7736cb6d8b","Type":"ContainerDied","Data":"5b2bcd4f10ca28c1177837ee93e38f2e2d3750f2676e890e7dd7ecd8c621612d"} Feb 16 18:22:14 crc kubenswrapper[4794]: I0216 18:22:14.849885 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cs97m" event={"ID":"dcc854a9-e724-466a-b5df-1a7736cb6d8b","Type":"ContainerDied","Data":"2e98fcc51d35fe6def31607be404549542ebac6ae58b13408eee8aeeb19af9a0"} Feb 16 18:22:14 crc kubenswrapper[4794]: I0216 18:22:14.849904 4794 scope.go:117] "RemoveContainer" containerID="5b2bcd4f10ca28c1177837ee93e38f2e2d3750f2676e890e7dd7ecd8c621612d" Feb 16 18:22:14 crc kubenswrapper[4794]: I0216 18:22:14.851451 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cs97m" Feb 16 18:22:14 crc kubenswrapper[4794]: I0216 18:22:14.933352 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cs97m"] Feb 16 18:22:14 crc kubenswrapper[4794]: I0216 18:22:14.949758 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cs97m"] Feb 16 18:22:14 crc kubenswrapper[4794]: I0216 18:22:14.957260 4794 scope.go:117] "RemoveContainer" containerID="1c4803d6f1282b88fad2510c61cdec3c9f6c932bf975d110c3a45f9301ccb20e" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.040328 4794 scope.go:117] "RemoveContainer" containerID="4a6b3cfa2412d46c89300413c978b00630af42a6ec85d73fd57dfcf29408139e" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.100347 4794 scope.go:117] "RemoveContainer" containerID="5b2bcd4f10ca28c1177837ee93e38f2e2d3750f2676e890e7dd7ecd8c621612d" Feb 16 18:22:15 crc kubenswrapper[4794]: E0216 18:22:15.100905 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b2bcd4f10ca28c1177837ee93e38f2e2d3750f2676e890e7dd7ecd8c621612d\": container with ID starting with 5b2bcd4f10ca28c1177837ee93e38f2e2d3750f2676e890e7dd7ecd8c621612d not found: ID does not exist" containerID="5b2bcd4f10ca28c1177837ee93e38f2e2d3750f2676e890e7dd7ecd8c621612d" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.100960 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2bcd4f10ca28c1177837ee93e38f2e2d3750f2676e890e7dd7ecd8c621612d"} err="failed to get container status \"5b2bcd4f10ca28c1177837ee93e38f2e2d3750f2676e890e7dd7ecd8c621612d\": rpc error: code = NotFound desc = could not find container \"5b2bcd4f10ca28c1177837ee93e38f2e2d3750f2676e890e7dd7ecd8c621612d\": container with ID starting with 5b2bcd4f10ca28c1177837ee93e38f2e2d3750f2676e890e7dd7ecd8c621612d not 
found: ID does not exist" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.100997 4794 scope.go:117] "RemoveContainer" containerID="1c4803d6f1282b88fad2510c61cdec3c9f6c932bf975d110c3a45f9301ccb20e" Feb 16 18:22:15 crc kubenswrapper[4794]: E0216 18:22:15.102880 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c4803d6f1282b88fad2510c61cdec3c9f6c932bf975d110c3a45f9301ccb20e\": container with ID starting with 1c4803d6f1282b88fad2510c61cdec3c9f6c932bf975d110c3a45f9301ccb20e not found: ID does not exist" containerID="1c4803d6f1282b88fad2510c61cdec3c9f6c932bf975d110c3a45f9301ccb20e" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.102931 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c4803d6f1282b88fad2510c61cdec3c9f6c932bf975d110c3a45f9301ccb20e"} err="failed to get container status \"1c4803d6f1282b88fad2510c61cdec3c9f6c932bf975d110c3a45f9301ccb20e\": rpc error: code = NotFound desc = could not find container \"1c4803d6f1282b88fad2510c61cdec3c9f6c932bf975d110c3a45f9301ccb20e\": container with ID starting with 1c4803d6f1282b88fad2510c61cdec3c9f6c932bf975d110c3a45f9301ccb20e not found: ID does not exist" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.102959 4794 scope.go:117] "RemoveContainer" containerID="4a6b3cfa2412d46c89300413c978b00630af42a6ec85d73fd57dfcf29408139e" Feb 16 18:22:15 crc kubenswrapper[4794]: E0216 18:22:15.110394 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a6b3cfa2412d46c89300413c978b00630af42a6ec85d73fd57dfcf29408139e\": container with ID starting with 4a6b3cfa2412d46c89300413c978b00630af42a6ec85d73fd57dfcf29408139e not found: ID does not exist" containerID="4a6b3cfa2412d46c89300413c978b00630af42a6ec85d73fd57dfcf29408139e" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.110455 4794 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a6b3cfa2412d46c89300413c978b00630af42a6ec85d73fd57dfcf29408139e"} err="failed to get container status \"4a6b3cfa2412d46c89300413c978b00630af42a6ec85d73fd57dfcf29408139e\": rpc error: code = NotFound desc = could not find container \"4a6b3cfa2412d46c89300413c978b00630af42a6ec85d73fd57dfcf29408139e\": container with ID starting with 4a6b3cfa2412d46c89300413c978b00630af42a6ec85d73fd57dfcf29408139e not found: ID does not exist" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.344472 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wwnlq" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.513585 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9717ba3f-a4bd-4fd5-8998-aff060455692-utilities\") pod \"9717ba3f-a4bd-4fd5-8998-aff060455692\" (UID: \"9717ba3f-a4bd-4fd5-8998-aff060455692\") " Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.513722 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9717ba3f-a4bd-4fd5-8998-aff060455692-catalog-content\") pod \"9717ba3f-a4bd-4fd5-8998-aff060455692\" (UID: \"9717ba3f-a4bd-4fd5-8998-aff060455692\") " Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.513930 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k68wn\" (UniqueName: \"kubernetes.io/projected/9717ba3f-a4bd-4fd5-8998-aff060455692-kube-api-access-k68wn\") pod \"9717ba3f-a4bd-4fd5-8998-aff060455692\" (UID: \"9717ba3f-a4bd-4fd5-8998-aff060455692\") " Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.515175 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/9717ba3f-a4bd-4fd5-8998-aff060455692-utilities" (OuterVolumeSpecName: "utilities") pod "9717ba3f-a4bd-4fd5-8998-aff060455692" (UID: "9717ba3f-a4bd-4fd5-8998-aff060455692"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.534104 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9717ba3f-a4bd-4fd5-8998-aff060455692-kube-api-access-k68wn" (OuterVolumeSpecName: "kube-api-access-k68wn") pod "9717ba3f-a4bd-4fd5-8998-aff060455692" (UID: "9717ba3f-a4bd-4fd5-8998-aff060455692"). InnerVolumeSpecName "kube-api-access-k68wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.617531 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k68wn\" (UniqueName: \"kubernetes.io/projected/9717ba3f-a4bd-4fd5-8998-aff060455692-kube-api-access-k68wn\") on node \"crc\" DevicePath \"\"" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.617572 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9717ba3f-a4bd-4fd5-8998-aff060455692-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.650167 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9717ba3f-a4bd-4fd5-8998-aff060455692-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9717ba3f-a4bd-4fd5-8998-aff060455692" (UID: "9717ba3f-a4bd-4fd5-8998-aff060455692"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.720382 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9717ba3f-a4bd-4fd5-8998-aff060455692-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.864362 4794 generic.go:334] "Generic (PLEG): container finished" podID="9717ba3f-a4bd-4fd5-8998-aff060455692" containerID="58a38057267906efba7152998b19c07c09527c0547119104a0ea8576c3347ec0" exitCode=0 Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.864446 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wwnlq" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.864482 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwnlq" event={"ID":"9717ba3f-a4bd-4fd5-8998-aff060455692","Type":"ContainerDied","Data":"58a38057267906efba7152998b19c07c09527c0547119104a0ea8576c3347ec0"} Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.864571 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwnlq" event={"ID":"9717ba3f-a4bd-4fd5-8998-aff060455692","Type":"ContainerDied","Data":"13dd75a428971b4ad7ab8ab7dd0cbfb2fb6e66f4075bff15effcae2f4d70d354"} Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.864616 4794 scope.go:117] "RemoveContainer" containerID="58a38057267906efba7152998b19c07c09527c0547119104a0ea8576c3347ec0" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.909072 4794 scope.go:117] "RemoveContainer" containerID="827510b5ad1a31d4a22c69ba61a2d919b5bb568578105738d111676418d4e567" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.918095 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wwnlq"] Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 
18:22:15.934683 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wwnlq"] Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.943195 4794 scope.go:117] "RemoveContainer" containerID="1ccef27dc8b6fe641b14f22058fe22af3320e40761bb1720ff1db4fa3c1e855f" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.969564 4794 scope.go:117] "RemoveContainer" containerID="58a38057267906efba7152998b19c07c09527c0547119104a0ea8576c3347ec0" Feb 16 18:22:15 crc kubenswrapper[4794]: E0216 18:22:15.970125 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58a38057267906efba7152998b19c07c09527c0547119104a0ea8576c3347ec0\": container with ID starting with 58a38057267906efba7152998b19c07c09527c0547119104a0ea8576c3347ec0 not found: ID does not exist" containerID="58a38057267906efba7152998b19c07c09527c0547119104a0ea8576c3347ec0" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.970173 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58a38057267906efba7152998b19c07c09527c0547119104a0ea8576c3347ec0"} err="failed to get container status \"58a38057267906efba7152998b19c07c09527c0547119104a0ea8576c3347ec0\": rpc error: code = NotFound desc = could not find container \"58a38057267906efba7152998b19c07c09527c0547119104a0ea8576c3347ec0\": container with ID starting with 58a38057267906efba7152998b19c07c09527c0547119104a0ea8576c3347ec0 not found: ID does not exist" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.970203 4794 scope.go:117] "RemoveContainer" containerID="827510b5ad1a31d4a22c69ba61a2d919b5bb568578105738d111676418d4e567" Feb 16 18:22:15 crc kubenswrapper[4794]: E0216 18:22:15.970650 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"827510b5ad1a31d4a22c69ba61a2d919b5bb568578105738d111676418d4e567\": container with ID 
starting with 827510b5ad1a31d4a22c69ba61a2d919b5bb568578105738d111676418d4e567 not found: ID does not exist" containerID="827510b5ad1a31d4a22c69ba61a2d919b5bb568578105738d111676418d4e567" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.970751 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"827510b5ad1a31d4a22c69ba61a2d919b5bb568578105738d111676418d4e567"} err="failed to get container status \"827510b5ad1a31d4a22c69ba61a2d919b5bb568578105738d111676418d4e567\": rpc error: code = NotFound desc = could not find container \"827510b5ad1a31d4a22c69ba61a2d919b5bb568578105738d111676418d4e567\": container with ID starting with 827510b5ad1a31d4a22c69ba61a2d919b5bb568578105738d111676418d4e567 not found: ID does not exist" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.970774 4794 scope.go:117] "RemoveContainer" containerID="1ccef27dc8b6fe641b14f22058fe22af3320e40761bb1720ff1db4fa3c1e855f" Feb 16 18:22:15 crc kubenswrapper[4794]: E0216 18:22:15.971161 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ccef27dc8b6fe641b14f22058fe22af3320e40761bb1720ff1db4fa3c1e855f\": container with ID starting with 1ccef27dc8b6fe641b14f22058fe22af3320e40761bb1720ff1db4fa3c1e855f not found: ID does not exist" containerID="1ccef27dc8b6fe641b14f22058fe22af3320e40761bb1720ff1db4fa3c1e855f" Feb 16 18:22:15 crc kubenswrapper[4794]: I0216 18:22:15.971235 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ccef27dc8b6fe641b14f22058fe22af3320e40761bb1720ff1db4fa3c1e855f"} err="failed to get container status \"1ccef27dc8b6fe641b14f22058fe22af3320e40761bb1720ff1db4fa3c1e855f\": rpc error: code = NotFound desc = could not find container \"1ccef27dc8b6fe641b14f22058fe22af3320e40761bb1720ff1db4fa3c1e855f\": container with ID starting with 1ccef27dc8b6fe641b14f22058fe22af3320e40761bb1720ff1db4fa3c1e855f not found: 
ID does not exist" Feb 16 18:22:16 crc kubenswrapper[4794]: I0216 18:22:16.808899 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9717ba3f-a4bd-4fd5-8998-aff060455692" path="/var/lib/kubelet/pods/9717ba3f-a4bd-4fd5-8998-aff060455692/volumes" Feb 16 18:22:16 crc kubenswrapper[4794]: I0216 18:22:16.809794 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcc854a9-e724-466a-b5df-1a7736cb6d8b" path="/var/lib/kubelet/pods/dcc854a9-e724-466a-b5df-1a7736cb6d8b/volumes" Feb 16 18:22:17 crc kubenswrapper[4794]: E0216 18:22:17.794893 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:22:20 crc kubenswrapper[4794]: I0216 18:22:20.141336 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 18:22:20 crc kubenswrapper[4794]: I0216 18:22:20.141725 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 18:22:20 crc kubenswrapper[4794]: I0216 18:22:20.141784 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 18:22:20 crc kubenswrapper[4794]: I0216 18:22:20.142835 4794 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81"} pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 18:22:20 crc kubenswrapper[4794]: I0216 18:22:20.142945 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" containerID="cri-o://62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" gracePeriod=600 Feb 16 18:22:20 crc kubenswrapper[4794]: E0216 18:22:20.290711 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:22:20 crc kubenswrapper[4794]: I0216 18:22:20.933113 4794 generic.go:334] "Generic (PLEG): container finished" podID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" exitCode=0 Feb 16 18:22:20 crc kubenswrapper[4794]: I0216 18:22:20.933211 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerDied","Data":"62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81"} Feb 16 18:22:20 crc kubenswrapper[4794]: I0216 18:22:20.933432 4794 scope.go:117] "RemoveContainer" containerID="6a86808fa24a97024716a4f82a177899dec831043626583733eb25cffc19e3bb" Feb 16 18:22:20 crc 
kubenswrapper[4794]: I0216 18:22:20.934275 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:22:20 crc kubenswrapper[4794]: E0216 18:22:20.934702 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:22:28 crc kubenswrapper[4794]: E0216 18:22:28.796002 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:22:29 crc kubenswrapper[4794]: E0216 18:22:29.793805 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:22:36 crc kubenswrapper[4794]: I0216 18:22:36.792690 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:22:36 crc kubenswrapper[4794]: E0216 18:22:36.793851 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:22:40 crc kubenswrapper[4794]: E0216 18:22:40.793726 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:22:41 crc kubenswrapper[4794]: I0216 18:22:41.793809 4794 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 18:22:41 crc kubenswrapper[4794]: E0216 18:22:41.875387 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 18:22:41 crc kubenswrapper[4794]: E0216 18:22:41.875455 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 18:22:41 crc kubenswrapper[4794]: E0216 18:22:41.875633 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2h5l2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7gcsf_openstack(c695f880-15cb-45b1-8545-60d8437ec631): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 18:22:41 crc kubenswrapper[4794]: E0216 18:22:41.877188 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:22:44 crc kubenswrapper[4794]: I0216 18:22:44.907414 4794 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","poddcc854a9-e724-466a-b5df-1a7736cb6d8b"] err="unable to destroy cgroup paths for cgroup [kubepods burstable poddcc854a9-e724-466a-b5df-1a7736cb6d8b] : Timed out while waiting for systemd to remove kubepods-burstable-poddcc854a9_e724_466a_b5df_1a7736cb6d8b.slice" Feb 16 18:22:47 crc kubenswrapper[4794]: I0216 18:22:47.791546 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:22:47 crc kubenswrapper[4794]: E0216 18:22:47.792178 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:22:53 crc kubenswrapper[4794]: E0216 18:22:53.794943 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:22:55 crc kubenswrapper[4794]: E0216 18:22:55.795559 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:22:59 crc kubenswrapper[4794]: I0216 18:22:59.515350 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vtrht"] Feb 16 18:22:59 crc kubenswrapper[4794]: E0216 18:22:59.516557 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9717ba3f-a4bd-4fd5-8998-aff060455692" containerName="registry-server" Feb 16 18:22:59 crc kubenswrapper[4794]: I0216 18:22:59.516574 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="9717ba3f-a4bd-4fd5-8998-aff060455692" containerName="registry-server" Feb 16 18:22:59 crc kubenswrapper[4794]: E0216 18:22:59.516622 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9717ba3f-a4bd-4fd5-8998-aff060455692" containerName="extract-utilities" Feb 16 18:22:59 crc kubenswrapper[4794]: I0216 18:22:59.516633 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="9717ba3f-a4bd-4fd5-8998-aff060455692" containerName="extract-utilities" Feb 16 18:22:59 crc kubenswrapper[4794]: E0216 18:22:59.516653 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcc854a9-e724-466a-b5df-1a7736cb6d8b" containerName="extract-content" Feb 16 18:22:59 crc kubenswrapper[4794]: I0216 18:22:59.516662 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc854a9-e724-466a-b5df-1a7736cb6d8b" containerName="extract-content" Feb 16 18:22:59 crc kubenswrapper[4794]: E0216 18:22:59.516681 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcc854a9-e724-466a-b5df-1a7736cb6d8b" containerName="extract-utilities" Feb 16 18:22:59 crc kubenswrapper[4794]: I0216 18:22:59.516687 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc854a9-e724-466a-b5df-1a7736cb6d8b" containerName="extract-utilities" Feb 16 18:22:59 crc kubenswrapper[4794]: E0216 18:22:59.516698 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcc854a9-e724-466a-b5df-1a7736cb6d8b" 
containerName="registry-server" Feb 16 18:22:59 crc kubenswrapper[4794]: I0216 18:22:59.516706 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc854a9-e724-466a-b5df-1a7736cb6d8b" containerName="registry-server" Feb 16 18:22:59 crc kubenswrapper[4794]: E0216 18:22:59.516731 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9717ba3f-a4bd-4fd5-8998-aff060455692" containerName="extract-content" Feb 16 18:22:59 crc kubenswrapper[4794]: I0216 18:22:59.516738 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="9717ba3f-a4bd-4fd5-8998-aff060455692" containerName="extract-content" Feb 16 18:22:59 crc kubenswrapper[4794]: I0216 18:22:59.517004 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcc854a9-e724-466a-b5df-1a7736cb6d8b" containerName="registry-server" Feb 16 18:22:59 crc kubenswrapper[4794]: I0216 18:22:59.517026 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="9717ba3f-a4bd-4fd5-8998-aff060455692" containerName="registry-server" Feb 16 18:22:59 crc kubenswrapper[4794]: I0216 18:22:59.519133 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vtrht" Feb 16 18:22:59 crc kubenswrapper[4794]: I0216 18:22:59.534324 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vtrht"] Feb 16 18:22:59 crc kubenswrapper[4794]: I0216 18:22:59.611480 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfb82\" (UniqueName: \"kubernetes.io/projected/81ae3257-fcd0-45d3-a675-26207fc94c78-kube-api-access-qfb82\") pod \"certified-operators-vtrht\" (UID: \"81ae3257-fcd0-45d3-a675-26207fc94c78\") " pod="openshift-marketplace/certified-operators-vtrht" Feb 16 18:22:59 crc kubenswrapper[4794]: I0216 18:22:59.611824 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ae3257-fcd0-45d3-a675-26207fc94c78-utilities\") pod \"certified-operators-vtrht\" (UID: \"81ae3257-fcd0-45d3-a675-26207fc94c78\") " pod="openshift-marketplace/certified-operators-vtrht" Feb 16 18:22:59 crc kubenswrapper[4794]: I0216 18:22:59.611879 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ae3257-fcd0-45d3-a675-26207fc94c78-catalog-content\") pod \"certified-operators-vtrht\" (UID: \"81ae3257-fcd0-45d3-a675-26207fc94c78\") " pod="openshift-marketplace/certified-operators-vtrht" Feb 16 18:22:59 crc kubenswrapper[4794]: I0216 18:22:59.712970 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfb82\" (UniqueName: \"kubernetes.io/projected/81ae3257-fcd0-45d3-a675-26207fc94c78-kube-api-access-qfb82\") pod \"certified-operators-vtrht\" (UID: \"81ae3257-fcd0-45d3-a675-26207fc94c78\") " pod="openshift-marketplace/certified-operators-vtrht" Feb 16 18:22:59 crc kubenswrapper[4794]: I0216 18:22:59.713373 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ae3257-fcd0-45d3-a675-26207fc94c78-utilities\") pod \"certified-operators-vtrht\" (UID: \"81ae3257-fcd0-45d3-a675-26207fc94c78\") " pod="openshift-marketplace/certified-operators-vtrht" Feb 16 18:22:59 crc kubenswrapper[4794]: I0216 18:22:59.713428 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ae3257-fcd0-45d3-a675-26207fc94c78-catalog-content\") pod \"certified-operators-vtrht\" (UID: \"81ae3257-fcd0-45d3-a675-26207fc94c78\") " pod="openshift-marketplace/certified-operators-vtrht" Feb 16 18:22:59 crc kubenswrapper[4794]: I0216 18:22:59.713974 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ae3257-fcd0-45d3-a675-26207fc94c78-catalog-content\") pod \"certified-operators-vtrht\" (UID: \"81ae3257-fcd0-45d3-a675-26207fc94c78\") " pod="openshift-marketplace/certified-operators-vtrht" Feb 16 18:22:59 crc kubenswrapper[4794]: I0216 18:22:59.714214 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ae3257-fcd0-45d3-a675-26207fc94c78-utilities\") pod \"certified-operators-vtrht\" (UID: \"81ae3257-fcd0-45d3-a675-26207fc94c78\") " pod="openshift-marketplace/certified-operators-vtrht" Feb 16 18:22:59 crc kubenswrapper[4794]: I0216 18:22:59.733322 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfb82\" (UniqueName: \"kubernetes.io/projected/81ae3257-fcd0-45d3-a675-26207fc94c78-kube-api-access-qfb82\") pod \"certified-operators-vtrht\" (UID: \"81ae3257-fcd0-45d3-a675-26207fc94c78\") " pod="openshift-marketplace/certified-operators-vtrht" Feb 16 18:22:59 crc kubenswrapper[4794]: I0216 18:22:59.840274 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vtrht" Feb 16 18:23:00 crc kubenswrapper[4794]: I0216 18:23:00.343626 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vtrht"] Feb 16 18:23:00 crc kubenswrapper[4794]: I0216 18:23:00.460482 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtrht" event={"ID":"81ae3257-fcd0-45d3-a675-26207fc94c78","Type":"ContainerStarted","Data":"1701a79e279aa625634cb1bb11e4534c94981031c2512780a1f3223213a8bd44"} Feb 16 18:23:01 crc kubenswrapper[4794]: I0216 18:23:01.472002 4794 generic.go:334] "Generic (PLEG): container finished" podID="81ae3257-fcd0-45d3-a675-26207fc94c78" containerID="2180f2da84b4fb48cdc474445db8fed1d17db196cf41689c61fcc3567644d85f" exitCode=0 Feb 16 18:23:01 crc kubenswrapper[4794]: I0216 18:23:01.472061 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtrht" event={"ID":"81ae3257-fcd0-45d3-a675-26207fc94c78","Type":"ContainerDied","Data":"2180f2da84b4fb48cdc474445db8fed1d17db196cf41689c61fcc3567644d85f"} Feb 16 18:23:01 crc kubenswrapper[4794]: I0216 18:23:01.792133 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:23:01 crc kubenswrapper[4794]: E0216 18:23:01.792737 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:23:03 crc kubenswrapper[4794]: I0216 18:23:03.498437 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtrht" 
event={"ID":"81ae3257-fcd0-45d3-a675-26207fc94c78","Type":"ContainerStarted","Data":"507001ff07dd5f12538c0c88ac13a42bfbd7d3c0f04adb95b282486601eddd5e"} Feb 16 18:23:04 crc kubenswrapper[4794]: I0216 18:23:04.511142 4794 generic.go:334] "Generic (PLEG): container finished" podID="81ae3257-fcd0-45d3-a675-26207fc94c78" containerID="507001ff07dd5f12538c0c88ac13a42bfbd7d3c0f04adb95b282486601eddd5e" exitCode=0 Feb 16 18:23:04 crc kubenswrapper[4794]: I0216 18:23:04.511221 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtrht" event={"ID":"81ae3257-fcd0-45d3-a675-26207fc94c78","Type":"ContainerDied","Data":"507001ff07dd5f12538c0c88ac13a42bfbd7d3c0f04adb95b282486601eddd5e"} Feb 16 18:23:05 crc kubenswrapper[4794]: E0216 18:23:05.793709 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:23:06 crc kubenswrapper[4794]: I0216 18:23:06.536032 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtrht" event={"ID":"81ae3257-fcd0-45d3-a675-26207fc94c78","Type":"ContainerStarted","Data":"637edbc5f5aae875303e63e88901d00bf3a4a723c9f667297dd72aa98c8aa56f"} Feb 16 18:23:06 crc kubenswrapper[4794]: I0216 18:23:06.558129 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vtrht" podStartSLOduration=3.641436451 podStartE2EDuration="7.558105574s" podCreationTimestamp="2026-02-16 18:22:59 +0000 UTC" firstStartedPulling="2026-02-16 18:23:01.474294548 +0000 UTC m=+5007.422389195" lastFinishedPulling="2026-02-16 18:23:05.390963671 +0000 UTC m=+5011.339058318" observedRunningTime="2026-02-16 18:23:06.55546544 +0000 UTC m=+5012.503560087" 
watchObservedRunningTime="2026-02-16 18:23:06.558105574 +0000 UTC m=+5012.506200221" Feb 16 18:23:08 crc kubenswrapper[4794]: E0216 18:23:08.915699 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 18:23:08 crc kubenswrapper[4794]: E0216 18:23:08.916401 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 18:23:08 crc kubenswrapper[4794]: E0216 18:23:08.916560 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh58dh6ch557h84h55ch564h5bh58fh5c8h5d4h584h669h667h569h59hd5hdbh9dh67ch5f9h59fh597h96h664h687h66dhfch5ddh5b7h88h59cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9v9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8981f528-1f74-4d56-a93c-22860725b490): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 18:23:08 crc kubenswrapper[4794]: E0216 18:23:08.917804 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:23:09 crc kubenswrapper[4794]: I0216 18:23:09.840444 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vtrht" Feb 16 18:23:09 crc kubenswrapper[4794]: I0216 18:23:09.840628 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vtrht" Feb 16 18:23:09 crc kubenswrapper[4794]: I0216 18:23:09.890369 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vtrht" Feb 16 18:23:10 crc kubenswrapper[4794]: I0216 18:23:10.662138 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vtrht" Feb 16 18:23:10 crc kubenswrapper[4794]: I0216 18:23:10.724057 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vtrht"] Feb 16 18:23:12 crc kubenswrapper[4794]: I0216 18:23:12.612455 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vtrht" podUID="81ae3257-fcd0-45d3-a675-26207fc94c78" containerName="registry-server" containerID="cri-o://637edbc5f5aae875303e63e88901d00bf3a4a723c9f667297dd72aa98c8aa56f" gracePeriod=2 Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.198375 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vtrht" Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.382699 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ae3257-fcd0-45d3-a675-26207fc94c78-catalog-content\") pod \"81ae3257-fcd0-45d3-a675-26207fc94c78\" (UID: \"81ae3257-fcd0-45d3-a675-26207fc94c78\") " Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.382801 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfb82\" (UniqueName: \"kubernetes.io/projected/81ae3257-fcd0-45d3-a675-26207fc94c78-kube-api-access-qfb82\") pod \"81ae3257-fcd0-45d3-a675-26207fc94c78\" (UID: \"81ae3257-fcd0-45d3-a675-26207fc94c78\") " Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.382856 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ae3257-fcd0-45d3-a675-26207fc94c78-utilities\") pod \"81ae3257-fcd0-45d3-a675-26207fc94c78\" (UID: \"81ae3257-fcd0-45d3-a675-26207fc94c78\") " Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.383887 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81ae3257-fcd0-45d3-a675-26207fc94c78-utilities" (OuterVolumeSpecName: "utilities") pod "81ae3257-fcd0-45d3-a675-26207fc94c78" (UID: "81ae3257-fcd0-45d3-a675-26207fc94c78"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.391141 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ae3257-fcd0-45d3-a675-26207fc94c78-kube-api-access-qfb82" (OuterVolumeSpecName: "kube-api-access-qfb82") pod "81ae3257-fcd0-45d3-a675-26207fc94c78" (UID: "81ae3257-fcd0-45d3-a675-26207fc94c78"). InnerVolumeSpecName "kube-api-access-qfb82". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.447895 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81ae3257-fcd0-45d3-a675-26207fc94c78-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81ae3257-fcd0-45d3-a675-26207fc94c78" (UID: "81ae3257-fcd0-45d3-a675-26207fc94c78"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.487227 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ae3257-fcd0-45d3-a675-26207fc94c78-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.487277 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfb82\" (UniqueName: \"kubernetes.io/projected/81ae3257-fcd0-45d3-a675-26207fc94c78-kube-api-access-qfb82\") on node \"crc\" DevicePath \"\"" Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.487293 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ae3257-fcd0-45d3-a675-26207fc94c78-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.629183 4794 generic.go:334] "Generic (PLEG): container finished" podID="81ae3257-fcd0-45d3-a675-26207fc94c78" containerID="637edbc5f5aae875303e63e88901d00bf3a4a723c9f667297dd72aa98c8aa56f" exitCode=0 Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.629262 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vtrht" Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.629262 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtrht" event={"ID":"81ae3257-fcd0-45d3-a675-26207fc94c78","Type":"ContainerDied","Data":"637edbc5f5aae875303e63e88901d00bf3a4a723c9f667297dd72aa98c8aa56f"} Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.629383 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtrht" event={"ID":"81ae3257-fcd0-45d3-a675-26207fc94c78","Type":"ContainerDied","Data":"1701a79e279aa625634cb1bb11e4534c94981031c2512780a1f3223213a8bd44"} Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.629425 4794 scope.go:117] "RemoveContainer" containerID="637edbc5f5aae875303e63e88901d00bf3a4a723c9f667297dd72aa98c8aa56f" Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.653977 4794 scope.go:117] "RemoveContainer" containerID="507001ff07dd5f12538c0c88ac13a42bfbd7d3c0f04adb95b282486601eddd5e" Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.698008 4794 scope.go:117] "RemoveContainer" containerID="2180f2da84b4fb48cdc474445db8fed1d17db196cf41689c61fcc3567644d85f" Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.699810 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vtrht"] Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.719916 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vtrht"] Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.747896 4794 scope.go:117] "RemoveContainer" containerID="637edbc5f5aae875303e63e88901d00bf3a4a723c9f667297dd72aa98c8aa56f" Feb 16 18:23:13 crc kubenswrapper[4794]: E0216 18:23:13.748418 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"637edbc5f5aae875303e63e88901d00bf3a4a723c9f667297dd72aa98c8aa56f\": container with ID starting with 637edbc5f5aae875303e63e88901d00bf3a4a723c9f667297dd72aa98c8aa56f not found: ID does not exist" containerID="637edbc5f5aae875303e63e88901d00bf3a4a723c9f667297dd72aa98c8aa56f" Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.748488 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"637edbc5f5aae875303e63e88901d00bf3a4a723c9f667297dd72aa98c8aa56f"} err="failed to get container status \"637edbc5f5aae875303e63e88901d00bf3a4a723c9f667297dd72aa98c8aa56f\": rpc error: code = NotFound desc = could not find container \"637edbc5f5aae875303e63e88901d00bf3a4a723c9f667297dd72aa98c8aa56f\": container with ID starting with 637edbc5f5aae875303e63e88901d00bf3a4a723c9f667297dd72aa98c8aa56f not found: ID does not exist" Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.748519 4794 scope.go:117] "RemoveContainer" containerID="507001ff07dd5f12538c0c88ac13a42bfbd7d3c0f04adb95b282486601eddd5e" Feb 16 18:23:13 crc kubenswrapper[4794]: E0216 18:23:13.748803 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"507001ff07dd5f12538c0c88ac13a42bfbd7d3c0f04adb95b282486601eddd5e\": container with ID starting with 507001ff07dd5f12538c0c88ac13a42bfbd7d3c0f04adb95b282486601eddd5e not found: ID does not exist" containerID="507001ff07dd5f12538c0c88ac13a42bfbd7d3c0f04adb95b282486601eddd5e" Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.748825 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"507001ff07dd5f12538c0c88ac13a42bfbd7d3c0f04adb95b282486601eddd5e"} err="failed to get container status \"507001ff07dd5f12538c0c88ac13a42bfbd7d3c0f04adb95b282486601eddd5e\": rpc error: code = NotFound desc = could not find container \"507001ff07dd5f12538c0c88ac13a42bfbd7d3c0f04adb95b282486601eddd5e\": container with ID 
starting with 507001ff07dd5f12538c0c88ac13a42bfbd7d3c0f04adb95b282486601eddd5e not found: ID does not exist" Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.748839 4794 scope.go:117] "RemoveContainer" containerID="2180f2da84b4fb48cdc474445db8fed1d17db196cf41689c61fcc3567644d85f" Feb 16 18:23:13 crc kubenswrapper[4794]: E0216 18:23:13.749068 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2180f2da84b4fb48cdc474445db8fed1d17db196cf41689c61fcc3567644d85f\": container with ID starting with 2180f2da84b4fb48cdc474445db8fed1d17db196cf41689c61fcc3567644d85f not found: ID does not exist" containerID="2180f2da84b4fb48cdc474445db8fed1d17db196cf41689c61fcc3567644d85f" Feb 16 18:23:13 crc kubenswrapper[4794]: I0216 18:23:13.749086 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2180f2da84b4fb48cdc474445db8fed1d17db196cf41689c61fcc3567644d85f"} err="failed to get container status \"2180f2da84b4fb48cdc474445db8fed1d17db196cf41689c61fcc3567644d85f\": rpc error: code = NotFound desc = could not find container \"2180f2da84b4fb48cdc474445db8fed1d17db196cf41689c61fcc3567644d85f\": container with ID starting with 2180f2da84b4fb48cdc474445db8fed1d17db196cf41689c61fcc3567644d85f not found: ID does not exist" Feb 16 18:23:14 crc kubenswrapper[4794]: I0216 18:23:14.810473 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ae3257-fcd0-45d3-a675-26207fc94c78" path="/var/lib/kubelet/pods/81ae3257-fcd0-45d3-a675-26207fc94c78/volumes" Feb 16 18:23:16 crc kubenswrapper[4794]: I0216 18:23:16.792348 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:23:16 crc kubenswrapper[4794]: E0216 18:23:16.793411 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:23:18 crc kubenswrapper[4794]: E0216 18:23:18.794904 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:23:23 crc kubenswrapper[4794]: E0216 18:23:23.793802 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:23:27 crc kubenswrapper[4794]: I0216 18:23:27.792822 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:23:27 crc kubenswrapper[4794]: E0216 18:23:27.793816 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:23:32 crc kubenswrapper[4794]: E0216 18:23:32.795816 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:23:37 crc kubenswrapper[4794]: E0216 18:23:37.795817 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:23:42 crc kubenswrapper[4794]: I0216 18:23:42.792124 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:23:42 crc kubenswrapper[4794]: E0216 18:23:42.793364 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:23:46 crc kubenswrapper[4794]: E0216 18:23:46.795636 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:23:52 crc kubenswrapper[4794]: E0216 18:23:52.795296 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:23:53 crc kubenswrapper[4794]: I0216 18:23:53.792825 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:23:53 crc kubenswrapper[4794]: E0216 18:23:53.794003 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:24:00 crc kubenswrapper[4794]: E0216 18:24:00.795015 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:24:03 crc kubenswrapper[4794]: E0216 18:24:03.795863 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:24:04 crc kubenswrapper[4794]: I0216 18:24:04.814566 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:24:04 crc kubenswrapper[4794]: E0216 18:24:04.820808 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:24:12 crc kubenswrapper[4794]: E0216 18:24:12.795059 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:24:16 crc kubenswrapper[4794]: I0216 18:24:16.792911 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:24:16 crc kubenswrapper[4794]: E0216 18:24:16.793846 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:24:16 crc kubenswrapper[4794]: E0216 18:24:16.795222 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:24:24 crc kubenswrapper[4794]: E0216 18:24:24.800598 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:24:27 crc kubenswrapper[4794]: I0216 18:24:27.792420 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:24:27 crc kubenswrapper[4794]: E0216 18:24:27.794654 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:24:30 crc kubenswrapper[4794]: E0216 18:24:30.795821 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:24:39 crc kubenswrapper[4794]: E0216 18:24:39.794774 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:24:41 crc kubenswrapper[4794]: I0216 18:24:41.791984 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:24:41 crc kubenswrapper[4794]: E0216 18:24:41.793342 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:24:41 crc kubenswrapper[4794]: E0216 18:24:41.793771 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:24:52 crc kubenswrapper[4794]: I0216 18:24:52.797162 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:24:52 crc kubenswrapper[4794]: E0216 18:24:52.801966 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:24:54 crc kubenswrapper[4794]: E0216 18:24:54.804674 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:24:54 crc kubenswrapper[4794]: E0216 18:24:54.805480 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:25:03 crc kubenswrapper[4794]: I0216 18:25:03.791875 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:25:03 crc kubenswrapper[4794]: E0216 18:25:03.792492 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:25:07 crc kubenswrapper[4794]: E0216 18:25:07.794914 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:25:08 crc kubenswrapper[4794]: E0216 18:25:08.795789 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:25:18 crc kubenswrapper[4794]: I0216 18:25:18.791560 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:25:18 crc kubenswrapper[4794]: E0216 18:25:18.792565 4794 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:25:20 crc kubenswrapper[4794]: E0216 18:25:20.797917 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:25:23 crc kubenswrapper[4794]: E0216 18:25:23.795126 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:25:30 crc kubenswrapper[4794]: I0216 18:25:30.791282 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:25:30 crc kubenswrapper[4794]: E0216 18:25:30.791988 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:25:34 crc kubenswrapper[4794]: E0216 18:25:34.801095 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:25:35 crc kubenswrapper[4794]: E0216 18:25:35.795039 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:25:43 crc kubenswrapper[4794]: I0216 18:25:43.791953 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:25:43 crc kubenswrapper[4794]: E0216 18:25:43.793068 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:25:49 crc kubenswrapper[4794]: E0216 18:25:49.793127 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:25:49 crc kubenswrapper[4794]: E0216 18:25:49.793807 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:25:50 crc kubenswrapper[4794]: I0216 18:25:50.725250 4794 generic.go:334] "Generic (PLEG): container finished" podID="60fab4d9-75ee-41a4-8a19-11f232514267" containerID="c7fc766b4254346928003c7d677315be911d23e0a6918e860f4c4aee214c78f4" exitCode=2 Feb 16 18:25:50 crc kubenswrapper[4794]: I0216 18:25:50.725598 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk" event={"ID":"60fab4d9-75ee-41a4-8a19-11f232514267","Type":"ContainerDied","Data":"c7fc766b4254346928003c7d677315be911d23e0a6918e860f4c4aee214c78f4"} Feb 16 18:25:52 crc kubenswrapper[4794]: I0216 18:25:52.247473 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk" Feb 16 18:25:52 crc kubenswrapper[4794]: I0216 18:25:52.328895 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh5rm\" (UniqueName: \"kubernetes.io/projected/60fab4d9-75ee-41a4-8a19-11f232514267-kube-api-access-vh5rm\") pod \"60fab4d9-75ee-41a4-8a19-11f232514267\" (UID: \"60fab4d9-75ee-41a4-8a19-11f232514267\") " Feb 16 18:25:52 crc kubenswrapper[4794]: I0216 18:25:52.328962 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60fab4d9-75ee-41a4-8a19-11f232514267-inventory\") pod \"60fab4d9-75ee-41a4-8a19-11f232514267\" (UID: \"60fab4d9-75ee-41a4-8a19-11f232514267\") " Feb 16 18:25:52 crc kubenswrapper[4794]: I0216 18:25:52.329258 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60fab4d9-75ee-41a4-8a19-11f232514267-ssh-key-openstack-edpm-ipam\") pod 
\"60fab4d9-75ee-41a4-8a19-11f232514267\" (UID: \"60fab4d9-75ee-41a4-8a19-11f232514267\") " Feb 16 18:25:52 crc kubenswrapper[4794]: I0216 18:25:52.346731 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60fab4d9-75ee-41a4-8a19-11f232514267-kube-api-access-vh5rm" (OuterVolumeSpecName: "kube-api-access-vh5rm") pod "60fab4d9-75ee-41a4-8a19-11f232514267" (UID: "60fab4d9-75ee-41a4-8a19-11f232514267"). InnerVolumeSpecName "kube-api-access-vh5rm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 18:25:52 crc kubenswrapper[4794]: I0216 18:25:52.366431 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fab4d9-75ee-41a4-8a19-11f232514267-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "60fab4d9-75ee-41a4-8a19-11f232514267" (UID: "60fab4d9-75ee-41a4-8a19-11f232514267"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 18:25:52 crc kubenswrapper[4794]: I0216 18:25:52.383696 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fab4d9-75ee-41a4-8a19-11f232514267-inventory" (OuterVolumeSpecName: "inventory") pod "60fab4d9-75ee-41a4-8a19-11f232514267" (UID: "60fab4d9-75ee-41a4-8a19-11f232514267"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 18:25:52 crc kubenswrapper[4794]: I0216 18:25:52.433375 4794 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60fab4d9-75ee-41a4-8a19-11f232514267-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 18:25:52 crc kubenswrapper[4794]: I0216 18:25:52.433430 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh5rm\" (UniqueName: \"kubernetes.io/projected/60fab4d9-75ee-41a4-8a19-11f232514267-kube-api-access-vh5rm\") on node \"crc\" DevicePath \"\"" Feb 16 18:25:52 crc kubenswrapper[4794]: I0216 18:25:52.433450 4794 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60fab4d9-75ee-41a4-8a19-11f232514267-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 18:25:52 crc kubenswrapper[4794]: I0216 18:25:52.759769 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk" event={"ID":"60fab4d9-75ee-41a4-8a19-11f232514267","Type":"ContainerDied","Data":"2356b34b57c74e73826ef81542ea0b870ff7f4b380e6fc3224156aae8a8776f9"} Feb 16 18:25:52 crc kubenswrapper[4794]: I0216 18:25:52.759835 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2356b34b57c74e73826ef81542ea0b870ff7f4b380e6fc3224156aae8a8776f9" Feb 16 18:25:52 crc kubenswrapper[4794]: I0216 18:25:52.759848 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk" Feb 16 18:25:57 crc kubenswrapper[4794]: I0216 18:25:57.792361 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:25:57 crc kubenswrapper[4794]: E0216 18:25:57.793403 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:26:01 crc kubenswrapper[4794]: E0216 18:26:01.795132 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:26:02 crc kubenswrapper[4794]: E0216 18:26:02.799500 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:26:11 crc kubenswrapper[4794]: I0216 18:26:11.792689 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:26:11 crc kubenswrapper[4794]: E0216 18:26:11.794065 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:26:16 crc kubenswrapper[4794]: E0216 18:26:16.795123 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:26:17 crc kubenswrapper[4794]: E0216 18:26:17.793752 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:26:23 crc kubenswrapper[4794]: I0216 18:26:23.792126 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:26:23 crc kubenswrapper[4794]: E0216 18:26:23.792882 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:26:30 crc kubenswrapper[4794]: E0216 18:26:30.795137 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:26:31 crc kubenswrapper[4794]: E0216 18:26:31.794570 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:26:35 crc kubenswrapper[4794]: I0216 18:26:35.791624 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:26:35 crc kubenswrapper[4794]: E0216 18:26:35.792419 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:26:43 crc kubenswrapper[4794]: E0216 18:26:43.794388 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:26:46 crc kubenswrapper[4794]: E0216 18:26:46.794589 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:26:48 crc kubenswrapper[4794]: I0216 18:26:48.842221 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9bvp9/must-gather-d6pd7"] Feb 16 18:26:48 crc kubenswrapper[4794]: E0216 18:26:48.843367 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ae3257-fcd0-45d3-a675-26207fc94c78" containerName="extract-content" Feb 16 18:26:48 crc kubenswrapper[4794]: I0216 18:26:48.843384 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ae3257-fcd0-45d3-a675-26207fc94c78" containerName="extract-content" Feb 16 18:26:48 crc kubenswrapper[4794]: E0216 18:26:48.843407 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ae3257-fcd0-45d3-a675-26207fc94c78" containerName="registry-server" Feb 16 18:26:48 crc kubenswrapper[4794]: I0216 18:26:48.843414 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ae3257-fcd0-45d3-a675-26207fc94c78" containerName="registry-server" Feb 16 18:26:48 crc kubenswrapper[4794]: E0216 18:26:48.843450 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ae3257-fcd0-45d3-a675-26207fc94c78" containerName="extract-utilities" Feb 16 18:26:48 crc kubenswrapper[4794]: I0216 18:26:48.843459 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ae3257-fcd0-45d3-a675-26207fc94c78" containerName="extract-utilities" Feb 16 18:26:48 crc kubenswrapper[4794]: E0216 18:26:48.843490 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60fab4d9-75ee-41a4-8a19-11f232514267" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 18:26:48 crc kubenswrapper[4794]: I0216 18:26:48.843499 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="60fab4d9-75ee-41a4-8a19-11f232514267" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 18:26:48 crc kubenswrapper[4794]: I0216 18:26:48.843767 4794 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="60fab4d9-75ee-41a4-8a19-11f232514267" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 18:26:48 crc kubenswrapper[4794]: I0216 18:26:48.844000 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ae3257-fcd0-45d3-a675-26207fc94c78" containerName="registry-server" Feb 16 18:26:48 crc kubenswrapper[4794]: I0216 18:26:48.845744 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9bvp9/must-gather-d6pd7" Feb 16 18:26:48 crc kubenswrapper[4794]: I0216 18:26:48.855228 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9bvp9"/"openshift-service-ca.crt" Feb 16 18:26:48 crc kubenswrapper[4794]: I0216 18:26:48.855901 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9bvp9"/"kube-root-ca.crt" Feb 16 18:26:48 crc kubenswrapper[4794]: I0216 18:26:48.857269 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-9bvp9"/"default-dockercfg-4ktw2" Feb 16 18:26:48 crc kubenswrapper[4794]: I0216 18:26:48.885522 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bbbb8431-488f-40c2-9166-28f5399b1253-must-gather-output\") pod \"must-gather-d6pd7\" (UID: \"bbbb8431-488f-40c2-9166-28f5399b1253\") " pod="openshift-must-gather-9bvp9/must-gather-d6pd7" Feb 16 18:26:48 crc kubenswrapper[4794]: I0216 18:26:48.885708 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcw2m\" (UniqueName: \"kubernetes.io/projected/bbbb8431-488f-40c2-9166-28f5399b1253-kube-api-access-hcw2m\") pod \"must-gather-d6pd7\" (UID: \"bbbb8431-488f-40c2-9166-28f5399b1253\") " pod="openshift-must-gather-9bvp9/must-gather-d6pd7" Feb 16 18:26:48 crc kubenswrapper[4794]: I0216 18:26:48.901721 4794 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9bvp9/must-gather-d6pd7"] Feb 16 18:26:48 crc kubenswrapper[4794]: I0216 18:26:48.987476 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcw2m\" (UniqueName: \"kubernetes.io/projected/bbbb8431-488f-40c2-9166-28f5399b1253-kube-api-access-hcw2m\") pod \"must-gather-d6pd7\" (UID: \"bbbb8431-488f-40c2-9166-28f5399b1253\") " pod="openshift-must-gather-9bvp9/must-gather-d6pd7" Feb 16 18:26:48 crc kubenswrapper[4794]: I0216 18:26:48.987742 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bbbb8431-488f-40c2-9166-28f5399b1253-must-gather-output\") pod \"must-gather-d6pd7\" (UID: \"bbbb8431-488f-40c2-9166-28f5399b1253\") " pod="openshift-must-gather-9bvp9/must-gather-d6pd7" Feb 16 18:26:48 crc kubenswrapper[4794]: I0216 18:26:48.988178 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bbbb8431-488f-40c2-9166-28f5399b1253-must-gather-output\") pod \"must-gather-d6pd7\" (UID: \"bbbb8431-488f-40c2-9166-28f5399b1253\") " pod="openshift-must-gather-9bvp9/must-gather-d6pd7" Feb 16 18:26:49 crc kubenswrapper[4794]: I0216 18:26:49.011475 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcw2m\" (UniqueName: \"kubernetes.io/projected/bbbb8431-488f-40c2-9166-28f5399b1253-kube-api-access-hcw2m\") pod \"must-gather-d6pd7\" (UID: \"bbbb8431-488f-40c2-9166-28f5399b1253\") " pod="openshift-must-gather-9bvp9/must-gather-d6pd7" Feb 16 18:26:49 crc kubenswrapper[4794]: I0216 18:26:49.170218 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9bvp9/must-gather-d6pd7" Feb 16 18:26:49 crc kubenswrapper[4794]: I0216 18:26:49.664955 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9bvp9/must-gather-d6pd7"] Feb 16 18:26:50 crc kubenswrapper[4794]: I0216 18:26:50.473297 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9bvp9/must-gather-d6pd7" event={"ID":"bbbb8431-488f-40c2-9166-28f5399b1253","Type":"ContainerStarted","Data":"fa2c9f9d36ae673373f720de7bef5ec7ce776bddc20c97069ce305ee8ff1f491"} Feb 16 18:26:50 crc kubenswrapper[4794]: I0216 18:26:50.791900 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:26:50 crc kubenswrapper[4794]: E0216 18:26:50.792270 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:26:55 crc kubenswrapper[4794]: E0216 18:26:55.797574 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:26:58 crc kubenswrapper[4794]: I0216 18:26:58.599523 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9bvp9/must-gather-d6pd7" event={"ID":"bbbb8431-488f-40c2-9166-28f5399b1253","Type":"ContainerStarted","Data":"87e8fd85c08d8e7dcdb892f28f2897267e7ab03e075c3e72eacd572d309d1f55"} Feb 16 18:26:58 crc 
kubenswrapper[4794]: I0216 18:26:58.599951 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9bvp9/must-gather-d6pd7" event={"ID":"bbbb8431-488f-40c2-9166-28f5399b1253","Type":"ContainerStarted","Data":"1cfe5ea54e4c83749b053beec5e1f137149ed83e762008161b895423f0392205"} Feb 16 18:26:58 crc kubenswrapper[4794]: I0216 18:26:58.627507 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9bvp9/must-gather-d6pd7" podStartSLOduration=2.402687232 podStartE2EDuration="10.627489341s" podCreationTimestamp="2026-02-16 18:26:48 +0000 UTC" firstStartedPulling="2026-02-16 18:26:49.674145945 +0000 UTC m=+5235.622240592" lastFinishedPulling="2026-02-16 18:26:57.898948054 +0000 UTC m=+5243.847042701" observedRunningTime="2026-02-16 18:26:58.616087278 +0000 UTC m=+5244.564181925" watchObservedRunningTime="2026-02-16 18:26:58.627489341 +0000 UTC m=+5244.575583988" Feb 16 18:27:01 crc kubenswrapper[4794]: E0216 18:27:01.794972 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:27:03 crc kubenswrapper[4794]: I0216 18:27:03.984020 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9bvp9/crc-debug-t7w4s"] Feb 16 18:27:03 crc kubenswrapper[4794]: I0216 18:27:03.988448 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9bvp9/crc-debug-t7w4s" Feb 16 18:27:04 crc kubenswrapper[4794]: I0216 18:27:04.051770 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75wbr\" (UniqueName: \"kubernetes.io/projected/172d430c-ad05-4d18-9937-7d2f996ed198-kube-api-access-75wbr\") pod \"crc-debug-t7w4s\" (UID: \"172d430c-ad05-4d18-9937-7d2f996ed198\") " pod="openshift-must-gather-9bvp9/crc-debug-t7w4s" Feb 16 18:27:04 crc kubenswrapper[4794]: I0216 18:27:04.052701 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/172d430c-ad05-4d18-9937-7d2f996ed198-host\") pod \"crc-debug-t7w4s\" (UID: \"172d430c-ad05-4d18-9937-7d2f996ed198\") " pod="openshift-must-gather-9bvp9/crc-debug-t7w4s" Feb 16 18:27:04 crc kubenswrapper[4794]: I0216 18:27:04.166779 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/172d430c-ad05-4d18-9937-7d2f996ed198-host\") pod \"crc-debug-t7w4s\" (UID: \"172d430c-ad05-4d18-9937-7d2f996ed198\") " pod="openshift-must-gather-9bvp9/crc-debug-t7w4s" Feb 16 18:27:04 crc kubenswrapper[4794]: I0216 18:27:04.166893 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75wbr\" (UniqueName: \"kubernetes.io/projected/172d430c-ad05-4d18-9937-7d2f996ed198-kube-api-access-75wbr\") pod \"crc-debug-t7w4s\" (UID: \"172d430c-ad05-4d18-9937-7d2f996ed198\") " pod="openshift-must-gather-9bvp9/crc-debug-t7w4s" Feb 16 18:27:04 crc kubenswrapper[4794]: I0216 18:27:04.167275 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/172d430c-ad05-4d18-9937-7d2f996ed198-host\") pod \"crc-debug-t7w4s\" (UID: \"172d430c-ad05-4d18-9937-7d2f996ed198\") " pod="openshift-must-gather-9bvp9/crc-debug-t7w4s" Feb 16 18:27:04 crc 
kubenswrapper[4794]: I0216 18:27:04.207258 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75wbr\" (UniqueName: \"kubernetes.io/projected/172d430c-ad05-4d18-9937-7d2f996ed198-kube-api-access-75wbr\") pod \"crc-debug-t7w4s\" (UID: \"172d430c-ad05-4d18-9937-7d2f996ed198\") " pod="openshift-must-gather-9bvp9/crc-debug-t7w4s" Feb 16 18:27:04 crc kubenswrapper[4794]: I0216 18:27:04.328568 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9bvp9/crc-debug-t7w4s" Feb 16 18:27:04 crc kubenswrapper[4794]: W0216 18:27:04.786434 4794 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice/crio-d56e654a05ff0d1361bec7a7628cd37c40eda391d5b0c94ef5df406f49c4d9cb WatchSource:0}: Error finding container d56e654a05ff0d1361bec7a7628cd37c40eda391d5b0c94ef5df406f49c4d9cb: Status 404 returned error can't find the container with id d56e654a05ff0d1361bec7a7628cd37c40eda391d5b0c94ef5df406f49c4d9cb Feb 16 18:27:05 crc kubenswrapper[4794]: I0216 18:27:05.674548 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9bvp9/crc-debug-t7w4s" event={"ID":"172d430c-ad05-4d18-9937-7d2f996ed198","Type":"ContainerStarted","Data":"d56e654a05ff0d1361bec7a7628cd37c40eda391d5b0c94ef5df406f49c4d9cb"} Feb 16 18:27:05 crc kubenswrapper[4794]: I0216 18:27:05.792968 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:27:05 crc kubenswrapper[4794]: E0216 18:27:05.793257 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:27:07 crc kubenswrapper[4794]: E0216 18:27:07.793539 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:27:14 crc kubenswrapper[4794]: E0216 18:27:14.800851 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:27:16 crc kubenswrapper[4794]: I0216 18:27:16.838924 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9bvp9/crc-debug-t7w4s" event={"ID":"172d430c-ad05-4d18-9937-7d2f996ed198","Type":"ContainerStarted","Data":"8c95b4e2e7ba852aeebca2873778d056b0e459be34a54a4950def911f31bbf05"} Feb 16 18:27:16 crc kubenswrapper[4794]: I0216 18:27:16.866737 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9bvp9/crc-debug-t7w4s" podStartSLOduration=2.866156627 podStartE2EDuration="13.866712318s" podCreationTimestamp="2026-02-16 18:27:03 +0000 UTC" firstStartedPulling="2026-02-16 18:27:04.788367012 +0000 UTC m=+5250.736461659" lastFinishedPulling="2026-02-16 18:27:15.788922703 +0000 UTC m=+5261.737017350" observedRunningTime="2026-02-16 18:27:16.852227629 +0000 UTC m=+5262.800322276" watchObservedRunningTime="2026-02-16 18:27:16.866712318 +0000 UTC m=+5262.814806965" Feb 16 18:27:17 crc kubenswrapper[4794]: I0216 18:27:17.792192 4794 scope.go:117] "RemoveContainer" 
containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:27:17 crc kubenswrapper[4794]: E0216 18:27:17.792837 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:27:18 crc kubenswrapper[4794]: E0216 18:27:18.794465 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:27:25 crc kubenswrapper[4794]: E0216 18:27:25.801060 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:27:30 crc kubenswrapper[4794]: I0216 18:27:30.792182 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:27:32 crc kubenswrapper[4794]: I0216 18:27:31.992970 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"860677b1eb456be903bb112f204623b2baf083d1d49cf8922af98fec72c85451"} Feb 16 18:27:32 crc kubenswrapper[4794]: E0216 18:27:32.793492 4794 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:27:33 crc kubenswrapper[4794]: I0216 18:27:33.004863 4794 generic.go:334] "Generic (PLEG): container finished" podID="172d430c-ad05-4d18-9937-7d2f996ed198" containerID="8c95b4e2e7ba852aeebca2873778d056b0e459be34a54a4950def911f31bbf05" exitCode=0 Feb 16 18:27:33 crc kubenswrapper[4794]: I0216 18:27:33.004905 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9bvp9/crc-debug-t7w4s" event={"ID":"172d430c-ad05-4d18-9937-7d2f996ed198","Type":"ContainerDied","Data":"8c95b4e2e7ba852aeebca2873778d056b0e459be34a54a4950def911f31bbf05"} Feb 16 18:27:34 crc kubenswrapper[4794]: I0216 18:27:34.137891 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9bvp9/crc-debug-t7w4s" Feb 16 18:27:34 crc kubenswrapper[4794]: I0216 18:27:34.186427 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9bvp9/crc-debug-t7w4s"] Feb 16 18:27:34 crc kubenswrapper[4794]: I0216 18:27:34.194772 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9bvp9/crc-debug-t7w4s"] Feb 16 18:27:34 crc kubenswrapper[4794]: I0216 18:27:34.303201 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/172d430c-ad05-4d18-9937-7d2f996ed198-host\") pod \"172d430c-ad05-4d18-9937-7d2f996ed198\" (UID: \"172d430c-ad05-4d18-9937-7d2f996ed198\") " Feb 16 18:27:34 crc kubenswrapper[4794]: I0216 18:27:34.303349 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/172d430c-ad05-4d18-9937-7d2f996ed198-host" (OuterVolumeSpecName: "host") pod 
"172d430c-ad05-4d18-9937-7d2f996ed198" (UID: "172d430c-ad05-4d18-9937-7d2f996ed198"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 18:27:34 crc kubenswrapper[4794]: I0216 18:27:34.303842 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75wbr\" (UniqueName: \"kubernetes.io/projected/172d430c-ad05-4d18-9937-7d2f996ed198-kube-api-access-75wbr\") pod \"172d430c-ad05-4d18-9937-7d2f996ed198\" (UID: \"172d430c-ad05-4d18-9937-7d2f996ed198\") " Feb 16 18:27:34 crc kubenswrapper[4794]: I0216 18:27:34.304908 4794 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/172d430c-ad05-4d18-9937-7d2f996ed198-host\") on node \"crc\" DevicePath \"\"" Feb 16 18:27:34 crc kubenswrapper[4794]: I0216 18:27:34.310914 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/172d430c-ad05-4d18-9937-7d2f996ed198-kube-api-access-75wbr" (OuterVolumeSpecName: "kube-api-access-75wbr") pod "172d430c-ad05-4d18-9937-7d2f996ed198" (UID: "172d430c-ad05-4d18-9937-7d2f996ed198"). InnerVolumeSpecName "kube-api-access-75wbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 18:27:34 crc kubenswrapper[4794]: I0216 18:27:34.407331 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75wbr\" (UniqueName: \"kubernetes.io/projected/172d430c-ad05-4d18-9937-7d2f996ed198-kube-api-access-75wbr\") on node \"crc\" DevicePath \"\"" Feb 16 18:27:34 crc kubenswrapper[4794]: I0216 18:27:34.809441 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="172d430c-ad05-4d18-9937-7d2f996ed198" path="/var/lib/kubelet/pods/172d430c-ad05-4d18-9937-7d2f996ed198/volumes" Feb 16 18:27:35 crc kubenswrapper[4794]: I0216 18:27:35.026185 4794 scope.go:117] "RemoveContainer" containerID="8c95b4e2e7ba852aeebca2873778d056b0e459be34a54a4950def911f31bbf05" Feb 16 18:27:35 crc kubenswrapper[4794]: I0216 18:27:35.026264 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9bvp9/crc-debug-t7w4s" Feb 16 18:27:35 crc kubenswrapper[4794]: I0216 18:27:35.359945 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9bvp9/crc-debug-ts2cn"] Feb 16 18:27:35 crc kubenswrapper[4794]: E0216 18:27:35.361087 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="172d430c-ad05-4d18-9937-7d2f996ed198" containerName="container-00" Feb 16 18:27:35 crc kubenswrapper[4794]: I0216 18:27:35.361104 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="172d430c-ad05-4d18-9937-7d2f996ed198" containerName="container-00" Feb 16 18:27:35 crc kubenswrapper[4794]: I0216 18:27:35.361708 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="172d430c-ad05-4d18-9937-7d2f996ed198" containerName="container-00" Feb 16 18:27:35 crc kubenswrapper[4794]: I0216 18:27:35.377767 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9bvp9/crc-debug-ts2cn" Feb 16 18:27:35 crc kubenswrapper[4794]: I0216 18:27:35.534809 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pm7d\" (UniqueName: \"kubernetes.io/projected/2248fae2-49c1-44d9-9589-e5fe8d70d038-kube-api-access-8pm7d\") pod \"crc-debug-ts2cn\" (UID: \"2248fae2-49c1-44d9-9589-e5fe8d70d038\") " pod="openshift-must-gather-9bvp9/crc-debug-ts2cn" Feb 16 18:27:35 crc kubenswrapper[4794]: I0216 18:27:35.534928 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2248fae2-49c1-44d9-9589-e5fe8d70d038-host\") pod \"crc-debug-ts2cn\" (UID: \"2248fae2-49c1-44d9-9589-e5fe8d70d038\") " pod="openshift-must-gather-9bvp9/crc-debug-ts2cn" Feb 16 18:27:35 crc kubenswrapper[4794]: I0216 18:27:35.637655 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2248fae2-49c1-44d9-9589-e5fe8d70d038-host\") pod \"crc-debug-ts2cn\" (UID: \"2248fae2-49c1-44d9-9589-e5fe8d70d038\") " pod="openshift-must-gather-9bvp9/crc-debug-ts2cn" Feb 16 18:27:35 crc kubenswrapper[4794]: I0216 18:27:35.637839 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2248fae2-49c1-44d9-9589-e5fe8d70d038-host\") pod \"crc-debug-ts2cn\" (UID: \"2248fae2-49c1-44d9-9589-e5fe8d70d038\") " pod="openshift-must-gather-9bvp9/crc-debug-ts2cn" Feb 16 18:27:35 crc kubenswrapper[4794]: I0216 18:27:35.637883 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pm7d\" (UniqueName: \"kubernetes.io/projected/2248fae2-49c1-44d9-9589-e5fe8d70d038-kube-api-access-8pm7d\") pod \"crc-debug-ts2cn\" (UID: \"2248fae2-49c1-44d9-9589-e5fe8d70d038\") " pod="openshift-must-gather-9bvp9/crc-debug-ts2cn" Feb 16 18:27:35 crc 
kubenswrapper[4794]: I0216 18:27:35.661018 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pm7d\" (UniqueName: \"kubernetes.io/projected/2248fae2-49c1-44d9-9589-e5fe8d70d038-kube-api-access-8pm7d\") pod \"crc-debug-ts2cn\" (UID: \"2248fae2-49c1-44d9-9589-e5fe8d70d038\") " pod="openshift-must-gather-9bvp9/crc-debug-ts2cn" Feb 16 18:27:35 crc kubenswrapper[4794]: I0216 18:27:35.753988 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9bvp9/crc-debug-ts2cn" Feb 16 18:27:36 crc kubenswrapper[4794]: I0216 18:27:36.047256 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9bvp9/crc-debug-ts2cn" event={"ID":"2248fae2-49c1-44d9-9589-e5fe8d70d038","Type":"ContainerStarted","Data":"d88fdb7b40dcdb29837752231d5391aa04f731833488ccb2d801f0ea2436f85f"} Feb 16 18:27:36 crc kubenswrapper[4794]: E0216 18:27:36.794364 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:27:37 crc kubenswrapper[4794]: I0216 18:27:37.062031 4794 generic.go:334] "Generic (PLEG): container finished" podID="2248fae2-49c1-44d9-9589-e5fe8d70d038" containerID="92223ef2ee776482c4ef5c8a9b3038c4a950ce94065cc5bc63545833d1f4fb55" exitCode=1 Feb 16 18:27:37 crc kubenswrapper[4794]: I0216 18:27:37.062111 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9bvp9/crc-debug-ts2cn" event={"ID":"2248fae2-49c1-44d9-9589-e5fe8d70d038","Type":"ContainerDied","Data":"92223ef2ee776482c4ef5c8a9b3038c4a950ce94065cc5bc63545833d1f4fb55"} Feb 16 18:27:37 crc kubenswrapper[4794]: I0216 18:27:37.106334 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-must-gather-9bvp9/crc-debug-ts2cn"] Feb 16 18:27:37 crc kubenswrapper[4794]: I0216 18:27:37.115999 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9bvp9/crc-debug-ts2cn"] Feb 16 18:27:38 crc kubenswrapper[4794]: I0216 18:27:38.192463 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9bvp9/crc-debug-ts2cn" Feb 16 18:27:38 crc kubenswrapper[4794]: I0216 18:27:38.301103 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pm7d\" (UniqueName: \"kubernetes.io/projected/2248fae2-49c1-44d9-9589-e5fe8d70d038-kube-api-access-8pm7d\") pod \"2248fae2-49c1-44d9-9589-e5fe8d70d038\" (UID: \"2248fae2-49c1-44d9-9589-e5fe8d70d038\") " Feb 16 18:27:38 crc kubenswrapper[4794]: I0216 18:27:38.301286 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2248fae2-49c1-44d9-9589-e5fe8d70d038-host\") pod \"2248fae2-49c1-44d9-9589-e5fe8d70d038\" (UID: \"2248fae2-49c1-44d9-9589-e5fe8d70d038\") " Feb 16 18:27:38 crc kubenswrapper[4794]: I0216 18:27:38.301354 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2248fae2-49c1-44d9-9589-e5fe8d70d038-host" (OuterVolumeSpecName: "host") pod "2248fae2-49c1-44d9-9589-e5fe8d70d038" (UID: "2248fae2-49c1-44d9-9589-e5fe8d70d038"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 18:27:38 crc kubenswrapper[4794]: I0216 18:27:38.302131 4794 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2248fae2-49c1-44d9-9589-e5fe8d70d038-host\") on node \"crc\" DevicePath \"\"" Feb 16 18:27:38 crc kubenswrapper[4794]: I0216 18:27:38.306978 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2248fae2-49c1-44d9-9589-e5fe8d70d038-kube-api-access-8pm7d" (OuterVolumeSpecName: "kube-api-access-8pm7d") pod "2248fae2-49c1-44d9-9589-e5fe8d70d038" (UID: "2248fae2-49c1-44d9-9589-e5fe8d70d038"). InnerVolumeSpecName "kube-api-access-8pm7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 18:27:38 crc kubenswrapper[4794]: I0216 18:27:38.404474 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pm7d\" (UniqueName: \"kubernetes.io/projected/2248fae2-49c1-44d9-9589-e5fe8d70d038-kube-api-access-8pm7d\") on node \"crc\" DevicePath \"\"" Feb 16 18:27:38 crc kubenswrapper[4794]: I0216 18:27:38.804611 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2248fae2-49c1-44d9-9589-e5fe8d70d038" path="/var/lib/kubelet/pods/2248fae2-49c1-44d9-9589-e5fe8d70d038/volumes" Feb 16 18:27:39 crc kubenswrapper[4794]: I0216 18:27:39.082041 4794 scope.go:117] "RemoveContainer" containerID="92223ef2ee776482c4ef5c8a9b3038c4a950ce94065cc5bc63545833d1f4fb55" Feb 16 18:27:39 crc kubenswrapper[4794]: I0216 18:27:39.082158 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9bvp9/crc-debug-ts2cn" Feb 16 18:27:44 crc kubenswrapper[4794]: E0216 18:27:44.635163 4794 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice/crio-d56e654a05ff0d1361bec7a7628cd37c40eda391d5b0c94ef5df406f49c4d9cb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice\": RecentStats: unable to find data in memory cache]" Feb 16 18:27:44 crc kubenswrapper[4794]: I0216 18:27:44.967820 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dh7px"] Feb 16 18:27:44 crc kubenswrapper[4794]: E0216 18:27:44.968611 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2248fae2-49c1-44d9-9589-e5fe8d70d038" containerName="container-00" Feb 16 18:27:44 crc kubenswrapper[4794]: I0216 18:27:44.968624 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="2248fae2-49c1-44d9-9589-e5fe8d70d038" containerName="container-00" Feb 16 18:27:44 crc kubenswrapper[4794]: I0216 18:27:44.968870 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="2248fae2-49c1-44d9-9589-e5fe8d70d038" containerName="container-00" Feb 16 18:27:44 crc kubenswrapper[4794]: I0216 18:27:44.971971 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dh7px" Feb 16 18:27:44 crc kubenswrapper[4794]: I0216 18:27:44.983086 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dh7px"] Feb 16 18:27:45 crc kubenswrapper[4794]: I0216 18:27:45.062830 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13d33e9-b0a0-47c4-8edc-83091aa2010d-utilities\") pod \"redhat-marketplace-dh7px\" (UID: \"e13d33e9-b0a0-47c4-8edc-83091aa2010d\") " pod="openshift-marketplace/redhat-marketplace-dh7px" Feb 16 18:27:45 crc kubenswrapper[4794]: I0216 18:27:45.062902 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13d33e9-b0a0-47c4-8edc-83091aa2010d-catalog-content\") pod \"redhat-marketplace-dh7px\" (UID: \"e13d33e9-b0a0-47c4-8edc-83091aa2010d\") " pod="openshift-marketplace/redhat-marketplace-dh7px" Feb 16 18:27:45 crc kubenswrapper[4794]: I0216 18:27:45.063094 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz6w7\" (UniqueName: \"kubernetes.io/projected/e13d33e9-b0a0-47c4-8edc-83091aa2010d-kube-api-access-cz6w7\") pod \"redhat-marketplace-dh7px\" (UID: \"e13d33e9-b0a0-47c4-8edc-83091aa2010d\") " pod="openshift-marketplace/redhat-marketplace-dh7px" Feb 16 18:27:45 crc kubenswrapper[4794]: I0216 18:27:45.165323 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz6w7\" (UniqueName: \"kubernetes.io/projected/e13d33e9-b0a0-47c4-8edc-83091aa2010d-kube-api-access-cz6w7\") pod \"redhat-marketplace-dh7px\" (UID: \"e13d33e9-b0a0-47c4-8edc-83091aa2010d\") " pod="openshift-marketplace/redhat-marketplace-dh7px" Feb 16 18:27:45 crc kubenswrapper[4794]: I0216 18:27:45.165479 4794 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13d33e9-b0a0-47c4-8edc-83091aa2010d-utilities\") pod \"redhat-marketplace-dh7px\" (UID: \"e13d33e9-b0a0-47c4-8edc-83091aa2010d\") " pod="openshift-marketplace/redhat-marketplace-dh7px" Feb 16 18:27:45 crc kubenswrapper[4794]: I0216 18:27:45.165556 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13d33e9-b0a0-47c4-8edc-83091aa2010d-catalog-content\") pod \"redhat-marketplace-dh7px\" (UID: \"e13d33e9-b0a0-47c4-8edc-83091aa2010d\") " pod="openshift-marketplace/redhat-marketplace-dh7px" Feb 16 18:27:45 crc kubenswrapper[4794]: I0216 18:27:45.165968 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13d33e9-b0a0-47c4-8edc-83091aa2010d-utilities\") pod \"redhat-marketplace-dh7px\" (UID: \"e13d33e9-b0a0-47c4-8edc-83091aa2010d\") " pod="openshift-marketplace/redhat-marketplace-dh7px" Feb 16 18:27:45 crc kubenswrapper[4794]: I0216 18:27:45.165975 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13d33e9-b0a0-47c4-8edc-83091aa2010d-catalog-content\") pod \"redhat-marketplace-dh7px\" (UID: \"e13d33e9-b0a0-47c4-8edc-83091aa2010d\") " pod="openshift-marketplace/redhat-marketplace-dh7px" Feb 16 18:27:45 crc kubenswrapper[4794]: I0216 18:27:45.184222 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz6w7\" (UniqueName: \"kubernetes.io/projected/e13d33e9-b0a0-47c4-8edc-83091aa2010d-kube-api-access-cz6w7\") pod \"redhat-marketplace-dh7px\" (UID: \"e13d33e9-b0a0-47c4-8edc-83091aa2010d\") " pod="openshift-marketplace/redhat-marketplace-dh7px" Feb 16 18:27:45 crc kubenswrapper[4794]: I0216 18:27:45.302150 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dh7px" Feb 16 18:27:45 crc kubenswrapper[4794]: I0216 18:27:45.850522 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dh7px"] Feb 16 18:27:46 crc kubenswrapper[4794]: I0216 18:27:46.178428 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dh7px" event={"ID":"e13d33e9-b0a0-47c4-8edc-83091aa2010d","Type":"ContainerStarted","Data":"f22a4a20c9be132d93db10d8880a18ace770e962d9f4adfd04e4a0407d053a3f"} Feb 16 18:27:46 crc kubenswrapper[4794]: I0216 18:27:46.796961 4794 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 18:27:46 crc kubenswrapper[4794]: E0216 18:27:46.929210 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 18:27:46 crc kubenswrapper[4794]: E0216 18:27:46.929499 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 18:27:46 crc kubenswrapper[4794]: E0216 18:27:46.929628 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2h5l2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7gcsf_openstack(c695f880-15cb-45b1-8545-60d8437ec631): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 18:27:46 crc kubenswrapper[4794]: E0216 18:27:46.930803 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:27:47 crc kubenswrapper[4794]: I0216 18:27:47.194551 4794 generic.go:334] "Generic (PLEG): container finished" podID="e13d33e9-b0a0-47c4-8edc-83091aa2010d" containerID="65761e3430ce82fd119193c27cdef1e5326ea61bdcb1825fca509add8ece0c8a" exitCode=0 Feb 16 18:27:47 crc kubenswrapper[4794]: I0216 18:27:47.194601 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dh7px" event={"ID":"e13d33e9-b0a0-47c4-8edc-83091aa2010d","Type":"ContainerDied","Data":"65761e3430ce82fd119193c27cdef1e5326ea61bdcb1825fca509add8ece0c8a"} Feb 16 18:27:47 crc kubenswrapper[4794]: E0216 18:27:47.854108 4794 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice/crio-d56e654a05ff0d1361bec7a7628cd37c40eda391d5b0c94ef5df406f49c4d9cb\": RecentStats: unable to find data in memory cache]" Feb 16 18:27:48 crc kubenswrapper[4794]: E0216 18:27:48.106742 4794 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice/crio-d56e654a05ff0d1361bec7a7628cd37c40eda391d5b0c94ef5df406f49c4d9cb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice\": RecentStats: unable to find data in memory cache]" Feb 16 18:27:48 crc kubenswrapper[4794]: E0216 18:27:48.108914 4794 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" 
err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice/crio-d56e654a05ff0d1361bec7a7628cd37c40eda391d5b0c94ef5df406f49c4d9cb\": RecentStats: unable to find data in memory cache]" Feb 16 18:27:49 crc kubenswrapper[4794]: I0216 18:27:49.223946 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dh7px" event={"ID":"e13d33e9-b0a0-47c4-8edc-83091aa2010d","Type":"ContainerStarted","Data":"fc39c9ac8ed2c840288f5370128253fd2fba253c09201d4529c984d928c81f59"} Feb 16 18:27:49 crc kubenswrapper[4794]: E0216 18:27:49.792699 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:27:50 crc kubenswrapper[4794]: I0216 18:27:50.243866 4794 generic.go:334] "Generic (PLEG): container finished" podID="e13d33e9-b0a0-47c4-8edc-83091aa2010d" containerID="fc39c9ac8ed2c840288f5370128253fd2fba253c09201d4529c984d928c81f59" exitCode=0 Feb 16 18:27:50 crc kubenswrapper[4794]: I0216 18:27:50.244930 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dh7px" event={"ID":"e13d33e9-b0a0-47c4-8edc-83091aa2010d","Type":"ContainerDied","Data":"fc39c9ac8ed2c840288f5370128253fd2fba253c09201d4529c984d928c81f59"} Feb 16 18:27:51 crc kubenswrapper[4794]: I0216 18:27:51.258247 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dh7px" 
event={"ID":"e13d33e9-b0a0-47c4-8edc-83091aa2010d","Type":"ContainerStarted","Data":"98e7d1acbd46b898be3cc81c3996f88f1d44197e679e217bccf21a506c979e77"} Feb 16 18:27:51 crc kubenswrapper[4794]: I0216 18:27:51.279925 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dh7px" podStartSLOduration=3.785208559 podStartE2EDuration="7.279907757s" podCreationTimestamp="2026-02-16 18:27:44 +0000 UTC" firstStartedPulling="2026-02-16 18:27:47.197593268 +0000 UTC m=+5293.145687915" lastFinishedPulling="2026-02-16 18:27:50.692292466 +0000 UTC m=+5296.640387113" observedRunningTime="2026-02-16 18:27:51.2740153 +0000 UTC m=+5297.222109947" watchObservedRunningTime="2026-02-16 18:27:51.279907757 +0000 UTC m=+5297.228002404" Feb 16 18:27:54 crc kubenswrapper[4794]: E0216 18:27:54.945223 4794 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice/crio-d56e654a05ff0d1361bec7a7628cd37c40eda391d5b0c94ef5df406f49c4d9cb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice\": RecentStats: unable to find data in memory cache]" Feb 16 18:27:55 crc kubenswrapper[4794]: I0216 18:27:55.302509 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dh7px" Feb 16 18:27:55 crc kubenswrapper[4794]: I0216 18:27:55.302564 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dh7px" Feb 16 18:27:56 crc kubenswrapper[4794]: I0216 18:27:56.379921 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-dh7px" podUID="e13d33e9-b0a0-47c4-8edc-83091aa2010d" containerName="registry-server" 
probeResult="failure" output=< Feb 16 18:27:56 crc kubenswrapper[4794]: timeout: failed to connect service ":50051" within 1s Feb 16 18:27:56 crc kubenswrapper[4794]: > Feb 16 18:27:58 crc kubenswrapper[4794]: E0216 18:27:58.793902 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:28:00 crc kubenswrapper[4794]: E0216 18:28:00.794116 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:28:02 crc kubenswrapper[4794]: E0216 18:28:02.851037 4794 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice/crio-d56e654a05ff0d1361bec7a7628cd37c40eda391d5b0c94ef5df406f49c4d9cb\": RecentStats: unable to find data in memory cache]" Feb 16 18:28:04 crc kubenswrapper[4794]: E0216 18:28:04.991336 4794 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice/crio-d56e654a05ff0d1361bec7a7628cd37c40eda391d5b0c94ef5df406f49c4d9cb\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice\": RecentStats: unable to find data in memory cache]" Feb 16 18:28:05 crc kubenswrapper[4794]: I0216 18:28:05.382354 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dh7px" Feb 16 18:28:05 crc kubenswrapper[4794]: I0216 18:28:05.470050 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dh7px" Feb 16 18:28:05 crc kubenswrapper[4794]: I0216 18:28:05.625883 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dh7px"] Feb 16 18:28:06 crc kubenswrapper[4794]: I0216 18:28:06.442954 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dh7px" podUID="e13d33e9-b0a0-47c4-8edc-83091aa2010d" containerName="registry-server" containerID="cri-o://98e7d1acbd46b898be3cc81c3996f88f1d44197e679e217bccf21a506c979e77" gracePeriod=2 Feb 16 18:28:06 crc kubenswrapper[4794]: I0216 18:28:06.997863 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dh7px" Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.186088 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13d33e9-b0a0-47c4-8edc-83091aa2010d-catalog-content\") pod \"e13d33e9-b0a0-47c4-8edc-83091aa2010d\" (UID: \"e13d33e9-b0a0-47c4-8edc-83091aa2010d\") " Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.186319 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13d33e9-b0a0-47c4-8edc-83091aa2010d-utilities\") pod \"e13d33e9-b0a0-47c4-8edc-83091aa2010d\" (UID: \"e13d33e9-b0a0-47c4-8edc-83091aa2010d\") " Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.186409 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz6w7\" (UniqueName: \"kubernetes.io/projected/e13d33e9-b0a0-47c4-8edc-83091aa2010d-kube-api-access-cz6w7\") pod \"e13d33e9-b0a0-47c4-8edc-83091aa2010d\" (UID: \"e13d33e9-b0a0-47c4-8edc-83091aa2010d\") " Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.187386 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e13d33e9-b0a0-47c4-8edc-83091aa2010d-utilities" (OuterVolumeSpecName: "utilities") pod "e13d33e9-b0a0-47c4-8edc-83091aa2010d" (UID: "e13d33e9-b0a0-47c4-8edc-83091aa2010d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.201229 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e13d33e9-b0a0-47c4-8edc-83091aa2010d-kube-api-access-cz6w7" (OuterVolumeSpecName: "kube-api-access-cz6w7") pod "e13d33e9-b0a0-47c4-8edc-83091aa2010d" (UID: "e13d33e9-b0a0-47c4-8edc-83091aa2010d"). InnerVolumeSpecName "kube-api-access-cz6w7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.225079 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e13d33e9-b0a0-47c4-8edc-83091aa2010d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e13d33e9-b0a0-47c4-8edc-83091aa2010d" (UID: "e13d33e9-b0a0-47c4-8edc-83091aa2010d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.289664 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13d33e9-b0a0-47c4-8edc-83091aa2010d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.289700 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13d33e9-b0a0-47c4-8edc-83091aa2010d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.289714 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz6w7\" (UniqueName: \"kubernetes.io/projected/e13d33e9-b0a0-47c4-8edc-83091aa2010d-kube-api-access-cz6w7\") on node \"crc\" DevicePath \"\"" Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.464557 4794 generic.go:334] "Generic (PLEG): container finished" podID="e13d33e9-b0a0-47c4-8edc-83091aa2010d" containerID="98e7d1acbd46b898be3cc81c3996f88f1d44197e679e217bccf21a506c979e77" exitCode=0 Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.464656 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dh7px" event={"ID":"e13d33e9-b0a0-47c4-8edc-83091aa2010d","Type":"ContainerDied","Data":"98e7d1acbd46b898be3cc81c3996f88f1d44197e679e217bccf21a506c979e77"} Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.464758 4794 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-dh7px" event={"ID":"e13d33e9-b0a0-47c4-8edc-83091aa2010d","Type":"ContainerDied","Data":"f22a4a20c9be132d93db10d8880a18ace770e962d9f4adfd04e4a0407d053a3f"} Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.464772 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dh7px" Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.464805 4794 scope.go:117] "RemoveContainer" containerID="98e7d1acbd46b898be3cc81c3996f88f1d44197e679e217bccf21a506c979e77" Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.502886 4794 scope.go:117] "RemoveContainer" containerID="fc39c9ac8ed2c840288f5370128253fd2fba253c09201d4529c984d928c81f59" Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.549252 4794 scope.go:117] "RemoveContainer" containerID="65761e3430ce82fd119193c27cdef1e5326ea61bdcb1825fca509add8ece0c8a" Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.556376 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dh7px"] Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.572374 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dh7px"] Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.624852 4794 scope.go:117] "RemoveContainer" containerID="98e7d1acbd46b898be3cc81c3996f88f1d44197e679e217bccf21a506c979e77" Feb 16 18:28:07 crc kubenswrapper[4794]: E0216 18:28:07.625371 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98e7d1acbd46b898be3cc81c3996f88f1d44197e679e217bccf21a506c979e77\": container with ID starting with 98e7d1acbd46b898be3cc81c3996f88f1d44197e679e217bccf21a506c979e77 not found: ID does not exist" containerID="98e7d1acbd46b898be3cc81c3996f88f1d44197e679e217bccf21a506c979e77" Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.625430 4794 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98e7d1acbd46b898be3cc81c3996f88f1d44197e679e217bccf21a506c979e77"} err="failed to get container status \"98e7d1acbd46b898be3cc81c3996f88f1d44197e679e217bccf21a506c979e77\": rpc error: code = NotFound desc = could not find container \"98e7d1acbd46b898be3cc81c3996f88f1d44197e679e217bccf21a506c979e77\": container with ID starting with 98e7d1acbd46b898be3cc81c3996f88f1d44197e679e217bccf21a506c979e77 not found: ID does not exist" Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.625471 4794 scope.go:117] "RemoveContainer" containerID="fc39c9ac8ed2c840288f5370128253fd2fba253c09201d4529c984d928c81f59" Feb 16 18:28:07 crc kubenswrapper[4794]: E0216 18:28:07.626053 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc39c9ac8ed2c840288f5370128253fd2fba253c09201d4529c984d928c81f59\": container with ID starting with fc39c9ac8ed2c840288f5370128253fd2fba253c09201d4529c984d928c81f59 not found: ID does not exist" containerID="fc39c9ac8ed2c840288f5370128253fd2fba253c09201d4529c984d928c81f59" Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.626079 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc39c9ac8ed2c840288f5370128253fd2fba253c09201d4529c984d928c81f59"} err="failed to get container status \"fc39c9ac8ed2c840288f5370128253fd2fba253c09201d4529c984d928c81f59\": rpc error: code = NotFound desc = could not find container \"fc39c9ac8ed2c840288f5370128253fd2fba253c09201d4529c984d928c81f59\": container with ID starting with fc39c9ac8ed2c840288f5370128253fd2fba253c09201d4529c984d928c81f59 not found: ID does not exist" Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.626095 4794 scope.go:117] "RemoveContainer" containerID="65761e3430ce82fd119193c27cdef1e5326ea61bdcb1825fca509add8ece0c8a" Feb 16 18:28:07 crc kubenswrapper[4794]: E0216 
18:28:07.626501 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65761e3430ce82fd119193c27cdef1e5326ea61bdcb1825fca509add8ece0c8a\": container with ID starting with 65761e3430ce82fd119193c27cdef1e5326ea61bdcb1825fca509add8ece0c8a not found: ID does not exist" containerID="65761e3430ce82fd119193c27cdef1e5326ea61bdcb1825fca509add8ece0c8a" Feb 16 18:28:07 crc kubenswrapper[4794]: I0216 18:28:07.626527 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65761e3430ce82fd119193c27cdef1e5326ea61bdcb1825fca509add8ece0c8a"} err="failed to get container status \"65761e3430ce82fd119193c27cdef1e5326ea61bdcb1825fca509add8ece0c8a\": rpc error: code = NotFound desc = could not find container \"65761e3430ce82fd119193c27cdef1e5326ea61bdcb1825fca509add8ece0c8a\": container with ID starting with 65761e3430ce82fd119193c27cdef1e5326ea61bdcb1825fca509add8ece0c8a not found: ID does not exist" Feb 16 18:28:08 crc kubenswrapper[4794]: I0216 18:28:08.811478 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e13d33e9-b0a0-47c4-8edc-83091aa2010d" path="/var/lib/kubelet/pods/e13d33e9-b0a0-47c4-8edc-83091aa2010d/volumes" Feb 16 18:28:09 crc kubenswrapper[4794]: E0216 18:28:09.793295 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:28:12 crc kubenswrapper[4794]: E0216 18:28:12.922114 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 18:28:12 crc kubenswrapper[4794]: E0216 18:28:12.922470 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 18:28:12 crc kubenswrapper[4794]: E0216 18:28:12.922641 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh58dh6ch557h84h55ch564h5bh58fh5c8h5d4h584h669h667h569h59hd5hdbh9dh67ch5f9h59fh597h96h664h687h66dhfch5ddh5b7h88h59cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9v9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8981f528-1f74-4d56-a93c-22860725b490): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 18:28:12 crc kubenswrapper[4794]: E0216 18:28:12.924434 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:28:15 crc kubenswrapper[4794]: E0216 18:28:15.370314 4794 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice/crio-d56e654a05ff0d1361bec7a7628cd37c40eda391d5b0c94ef5df406f49c4d9cb\": RecentStats: unable to find data in memory cache]" Feb 16 18:28:17 crc kubenswrapper[4794]: E0216 18:28:17.849135 4794 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice/crio-d56e654a05ff0d1361bec7a7628cd37c40eda391d5b0c94ef5df406f49c4d9cb\": RecentStats: unable to find data in memory cache]" Feb 16 18:28:23 crc kubenswrapper[4794]: E0216 18:28:23.794117 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:28:23 crc kubenswrapper[4794]: E0216 18:28:23.794371 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:28:25 crc kubenswrapper[4794]: E0216 18:28:25.723986 4794 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice/crio-d56e654a05ff0d1361bec7a7628cd37c40eda391d5b0c94ef5df406f49c4d9cb\": RecentStats: unable to find data in memory cache]" Feb 16 18:28:33 crc kubenswrapper[4794]: E0216 18:28:33.143982 4794 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice/crio-d56e654a05ff0d1361bec7a7628cd37c40eda391d5b0c94ef5df406f49c4d9cb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172d430c_ad05_4d18_9937_7d2f996ed198.slice\": RecentStats: unable to find data in memory cache]" Feb 16 18:28:34 crc kubenswrapper[4794]: E0216 18:28:34.839905 4794 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c7641f92dab2bc9479e7105fa7649ccb1d0db030842b9a14e5792f2fcad0ecc2/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c7641f92dab2bc9479e7105fa7649ccb1d0db030842b9a14e5792f2fcad0ecc2/diff: no such file or directory, extraDiskErr: Feb 16 18:28:35 crc kubenswrapper[4794]: E0216 18:28:35.794871 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:28:35 crc kubenswrapper[4794]: E0216 18:28:35.796140 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:28:44 crc kubenswrapper[4794]: I0216 18:28:44.180662 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_cd26d451-60ee-4078-a937-5c4969efc977/aodh-api/0.log" Feb 16 18:28:44 crc kubenswrapper[4794]: I0216 18:28:44.376182 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_cd26d451-60ee-4078-a937-5c4969efc977/aodh-evaluator/0.log" Feb 16 18:28:44 crc kubenswrapper[4794]: I0216 18:28:44.418600 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_cd26d451-60ee-4078-a937-5c4969efc977/aodh-listener/0.log" Feb 16 18:28:44 crc kubenswrapper[4794]: I0216 18:28:44.534481 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6c7c9b8d66-vz9b5_42a40424-c14f-4779-ac7c-d2c5828db304/barbican-api/0.log" Feb 16 18:28:44 crc kubenswrapper[4794]: I0216 18:28:44.537048 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_cd26d451-60ee-4078-a937-5c4969efc977/aodh-notifier/0.log" Feb 16 18:28:44 crc kubenswrapper[4794]: I0216 18:28:44.609430 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6c7c9b8d66-vz9b5_42a40424-c14f-4779-ac7c-d2c5828db304/barbican-api-log/0.log" Feb 16 18:28:44 crc kubenswrapper[4794]: I0216 18:28:44.745137 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-796f585bbb-7grdw_f2beacbf-4b81-4375-be49-872edd3d0d9d/barbican-keystone-listener/0.log" 
Feb 16 18:28:44 crc kubenswrapper[4794]: I0216 18:28:44.832192 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-796f585bbb-7grdw_f2beacbf-4b81-4375-be49-872edd3d0d9d/barbican-keystone-listener-log/0.log" Feb 16 18:28:44 crc kubenswrapper[4794]: I0216 18:28:44.945271 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8df6f765f-hzfz6_20d47909-0796-4ee7-8209-9c30ae86ff2f/barbican-worker/0.log" Feb 16 18:28:45 crc kubenswrapper[4794]: I0216 18:28:45.014948 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8df6f765f-hzfz6_20d47909-0796-4ee7-8209-9c30ae86ff2f/barbican-worker-log/0.log" Feb 16 18:28:45 crc kubenswrapper[4794]: I0216 18:28:45.095516 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-kzl44_00aac5cd-2d06-4021-9d8d-5724b2ad87bc/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 18:28:45 crc kubenswrapper[4794]: I0216 18:28:45.311731 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8981f528-1f74-4d56-a93c-22860725b490/ceilometer-notification-agent/0.log" Feb 16 18:28:45 crc kubenswrapper[4794]: I0216 18:28:45.355275 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8981f528-1f74-4d56-a93c-22860725b490/proxy-httpd/0.log" Feb 16 18:28:45 crc kubenswrapper[4794]: I0216 18:28:45.443212 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8981f528-1f74-4d56-a93c-22860725b490/sg-core/0.log" Feb 16 18:28:45 crc kubenswrapper[4794]: I0216 18:28:45.562779 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_58f60884-ce4b-47ac-8720-dd812acdc8a8/cinder-api-log/0.log" Feb 16 18:28:45 crc kubenswrapper[4794]: I0216 18:28:45.594067 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_58f60884-ce4b-47ac-8720-dd812acdc8a8/cinder-api/0.log" Feb 16 18:28:45 crc kubenswrapper[4794]: I0216 18:28:45.821925 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3342e2cd-2d8f-4dee-be8e-86c60e81ba81/cinder-scheduler/0.log" Feb 16 18:28:45 crc kubenswrapper[4794]: I0216 18:28:45.981083 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3342e2cd-2d8f-4dee-be8e-86c60e81ba81/probe/0.log" Feb 16 18:28:46 crc kubenswrapper[4794]: I0216 18:28:46.061951 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-97495_00b864cb-0f2d-4ff9-ab38-0463ac283e01/init/0.log" Feb 16 18:28:46 crc kubenswrapper[4794]: I0216 18:28:46.322196 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-97495_00b864cb-0f2d-4ff9-ab38-0463ac283e01/init/0.log" Feb 16 18:28:46 crc kubenswrapper[4794]: I0216 18:28:46.322567 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-97495_00b864cb-0f2d-4ff9-ab38-0463ac283e01/dnsmasq-dns/0.log" Feb 16 18:28:46 crc kubenswrapper[4794]: I0216 18:28:46.376320 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-4l9qg_d4f1b91c-811a-4a5a-ba6f-fcc833d2fd12/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 18:28:46 crc kubenswrapper[4794]: I0216 18:28:46.523532 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-8s26h_7566f2a1-be5c-4ab7-8639-e162712a8ea4/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 18:28:46 crc kubenswrapper[4794]: I0216 18:28:46.650309 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-9h9h7_7694359c-dd70-4640-bcc6-2ed4377e5cbb/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 18:28:46 crc kubenswrapper[4794]: I0216 18:28:46.811017 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-c2qcd_1acb8748-d3eb-4984-91a5-2f2b43926abf/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 18:28:46 crc kubenswrapper[4794]: I0216 18:28:46.858558 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-cjr9z_25576ab9-760b-40e6-b7c7-866fbb7ed70c/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 18:28:47 crc kubenswrapper[4794]: I0216 18:28:47.052294 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-q7gkk_60fab4d9-75ee-41a4-8a19-11f232514267/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 18:28:47 crc kubenswrapper[4794]: I0216 18:28:47.115415 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-wt7qf_8e0581f8-9225-4111-9249-c8b122cb33d3/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 18:28:47 crc kubenswrapper[4794]: I0216 18:28:47.278132 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4db1f19d-64b2-439f-a763-ab694b3e2953/glance-httpd/0.log" Feb 16 18:28:47 crc kubenswrapper[4794]: I0216 18:28:47.335866 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4db1f19d-64b2-439f-a763-ab694b3e2953/glance-log/0.log" Feb 16 18:28:47 crc kubenswrapper[4794]: I0216 18:28:47.523826 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_446c165c-d077-4e2c-a902-ee7d1961edc6/glance-httpd/0.log" Feb 16 18:28:47 crc kubenswrapper[4794]: I0216 18:28:47.524059 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_446c165c-d077-4e2c-a902-ee7d1961edc6/glance-log/0.log" Feb 16 18:28:47 crc kubenswrapper[4794]: E0216 18:28:47.794104 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:28:48 crc kubenswrapper[4794]: I0216 18:28:48.185587 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-868454c84d-mwnsk_57584011-2a08-4edd-a53a-fa54541cfc82/heat-api/0.log" Feb 16 18:28:48 crc kubenswrapper[4794]: I0216 18:28:48.342964 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-849cbf9447-6chxp_ccd75b14-da33-40cb-ace9-fae71c629d01/heat-cfnapi/0.log" Feb 16 18:28:48 crc kubenswrapper[4794]: I0216 18:28:48.982351 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-547586545-c5624_c0403b0e-4120-4eb9-b7ed-dcfafb224d46/heat-engine/0.log" Feb 16 18:28:49 crc kubenswrapper[4794]: I0216 18:28:49.058986 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5bfdb47d5f-nhr7b_0e6e652d-e656-43a1-9272-bc48d55d7c35/keystone-api/0.log" Feb 16 18:28:49 crc kubenswrapper[4794]: I0216 18:28:49.072928 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29521081-8bkn4_4e46afa6-6711-47b2-88ca-b2b185d690e7/keystone-cron/0.log" Feb 16 18:28:49 crc kubenswrapper[4794]: I0216 18:28:49.269792 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_4100ccdc-4397-45ed-8c44-e877abeb689c/kube-state-metrics/0.log" Feb 16 18:28:49 crc kubenswrapper[4794]: I0216 18:28:49.623086 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_14a7777c-3957-4591-959c-746e1557c309/mysqld-exporter/0.log" Feb 16 18:28:49 crc kubenswrapper[4794]: I0216 18:28:49.641145 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6cdff78ddf-hj4zf_7419e1b3-c58c-499d-bed5-5b8404f50c31/neutron-api/0.log" Feb 16 18:28:49 crc kubenswrapper[4794]: I0216 18:28:49.775319 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6cdff78ddf-hj4zf_7419e1b3-c58c-499d-bed5-5b8404f50c31/neutron-httpd/0.log" Feb 16 18:28:49 crc kubenswrapper[4794]: E0216 18:28:49.794959 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:28:50 crc kubenswrapper[4794]: I0216 18:28:50.116322 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3e7d5a0f-a988-41f2-8e63-6a3fccddbacc/nova-api-log/0.log" Feb 16 18:28:50 crc kubenswrapper[4794]: I0216 18:28:50.243635 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_7beb845f-ab40-4f39-82eb-dff623435a03/nova-cell0-conductor-conductor/0.log" Feb 16 18:28:50 crc kubenswrapper[4794]: I0216 18:28:50.412051 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3e7d5a0f-a988-41f2-8e63-6a3fccddbacc/nova-api-api/0.log" Feb 16 18:28:51 crc kubenswrapper[4794]: I0216 18:28:51.076608 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_33cde066-6417-44f1-9bd6-53ceb52a577b/nova-cell1-conductor-conductor/0.log" Feb 16 18:28:51 crc kubenswrapper[4794]: I0216 18:28:51.219415 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_2952b970-259a-4f23-b3bc-614d5e88a6d1/nova-cell1-novncproxy-novncproxy/0.log" Feb 16 18:28:51 crc kubenswrapper[4794]: I0216 18:28:51.354622 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_759dd9df-054e-4675-b614-d6cf32280981/nova-metadata-log/0.log" Feb 16 18:28:51 crc kubenswrapper[4794]: I0216 18:28:51.532459 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_d18ce339-9b99-485a-8bff-1aa4bbf31dd7/nova-scheduler-scheduler/0.log" Feb 16 18:28:51 crc kubenswrapper[4794]: I0216 18:28:51.596783 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_927505a3-c47f-4b5a-ac60-d35b0140edfe/mysql-bootstrap/0.log" Feb 16 18:28:52 crc kubenswrapper[4794]: I0216 18:28:52.006326 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_927505a3-c47f-4b5a-ac60-d35b0140edfe/galera/0.log" Feb 16 18:28:52 crc kubenswrapper[4794]: I0216 18:28:52.061091 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_927505a3-c47f-4b5a-ac60-d35b0140edfe/mysql-bootstrap/0.log" Feb 16 18:28:52 crc kubenswrapper[4794]: I0216 18:28:52.270116 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c07f58cd-ea21-4cb3-a3db-0d184c3628bd/mysql-bootstrap/0.log" Feb 16 18:28:52 crc kubenswrapper[4794]: I0216 18:28:52.374607 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c07f58cd-ea21-4cb3-a3db-0d184c3628bd/mysql-bootstrap/0.log" Feb 16 18:28:52 crc kubenswrapper[4794]: I0216 18:28:52.472147 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_c07f58cd-ea21-4cb3-a3db-0d184c3628bd/galera/0.log" Feb 16 18:28:52 crc kubenswrapper[4794]: I0216 18:28:52.566184 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_11c6449f-ae59-4210-9c59-bafcbb116ed8/openstackclient/0.log" Feb 16 18:28:52 crc kubenswrapper[4794]: I0216 18:28:52.744885 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-frfcd_e6ba4ad1-ede1-49d7-a317-8f6d71134947/ovn-controller/0.log" Feb 16 18:28:52 crc kubenswrapper[4794]: I0216 18:28:52.896712 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-b25d2_6a5d8158-28e2-414b-8ebd-abce9aa4b12d/openstack-network-exporter/0.log" Feb 16 18:28:53 crc kubenswrapper[4794]: I0216 18:28:53.087823 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jgbgf_004145a2-867a-43fd-be9d-ad53806a1c19/ovsdb-server-init/0.log" Feb 16 18:28:53 crc kubenswrapper[4794]: I0216 18:28:53.158128 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_759dd9df-054e-4675-b614-d6cf32280981/nova-metadata-metadata/0.log" Feb 16 18:28:53 crc kubenswrapper[4794]: I0216 18:28:53.270165 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jgbgf_004145a2-867a-43fd-be9d-ad53806a1c19/ovs-vswitchd/0.log" Feb 16 18:28:53 crc kubenswrapper[4794]: I0216 18:28:53.294152 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jgbgf_004145a2-867a-43fd-be9d-ad53806a1c19/ovsdb-server/0.log" Feb 16 18:28:53 crc kubenswrapper[4794]: I0216 18:28:53.313145 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jgbgf_004145a2-867a-43fd-be9d-ad53806a1c19/ovsdb-server-init/0.log" Feb 16 18:28:53 crc kubenswrapper[4794]: I0216 18:28:53.502543 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_b05015a0-b648-4ebd-a7f1-2621e125504e/openstack-network-exporter/0.log" Feb 16 18:28:53 crc kubenswrapper[4794]: I0216 18:28:53.527567 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b05015a0-b648-4ebd-a7f1-2621e125504e/ovn-northd/0.log" Feb 16 18:28:53 crc kubenswrapper[4794]: I0216 18:28:53.619295 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8528fad2-4c8a-4171-92a6-eb31e80d0f2e/openstack-network-exporter/0.log" Feb 16 18:28:53 crc kubenswrapper[4794]: I0216 18:28:53.711422 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8528fad2-4c8a-4171-92a6-eb31e80d0f2e/ovsdbserver-nb/0.log" Feb 16 18:28:53 crc kubenswrapper[4794]: I0216 18:28:53.745382 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_eab559f8-3130-43e5-bbf7-cf980cb15a56/openstack-network-exporter/0.log" Feb 16 18:28:53 crc kubenswrapper[4794]: I0216 18:28:53.889902 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_eab559f8-3130-43e5-bbf7-cf980cb15a56/ovsdbserver-sb/0.log" Feb 16 18:28:54 crc kubenswrapper[4794]: I0216 18:28:54.080532 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-57f99b44dd-9kw4m_e70af0d8-dad3-4bab-bfea-e82fef6b308e/placement-api/0.log" Feb 16 18:28:54 crc kubenswrapper[4794]: I0216 18:28:54.089048 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-57f99b44dd-9kw4m_e70af0d8-dad3-4bab-bfea-e82fef6b308e/placement-log/0.log" Feb 16 18:28:54 crc kubenswrapper[4794]: I0216 18:28:54.231679 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_11e05321-0f2f-4688-abd5-0e3a019bf530/init-config-reloader/0.log" Feb 16 18:28:54 crc kubenswrapper[4794]: I0216 18:28:54.400712 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_11e05321-0f2f-4688-abd5-0e3a019bf530/init-config-reloader/0.log" Feb 16 18:28:54 crc kubenswrapper[4794]: I0216 18:28:54.492379 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_11e05321-0f2f-4688-abd5-0e3a019bf530/thanos-sidecar/0.log" Feb 16 18:28:54 crc kubenswrapper[4794]: I0216 18:28:54.509761 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_11e05321-0f2f-4688-abd5-0e3a019bf530/prometheus/0.log" Feb 16 18:28:54 crc kubenswrapper[4794]: I0216 18:28:54.521729 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_11e05321-0f2f-4688-abd5-0e3a019bf530/config-reloader/0.log" Feb 16 18:28:54 crc kubenswrapper[4794]: I0216 18:28:54.679503 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d805784a-6606-49cf-a441-4e17697ab5ea/setup-container/0.log" Feb 16 18:28:54 crc kubenswrapper[4794]: I0216 18:28:54.886560 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d805784a-6606-49cf-a441-4e17697ab5ea/setup-container/0.log" Feb 16 18:28:55 crc kubenswrapper[4794]: I0216 18:28:55.075815 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d805784a-6606-49cf-a441-4e17697ab5ea/rabbitmq/0.log" Feb 16 18:28:55 crc kubenswrapper[4794]: I0216 18:28:55.129850 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ba133018-dec1-47aa-92e3-a0e3440dec49/setup-container/0.log" Feb 16 18:28:55 crc kubenswrapper[4794]: I0216 18:28:55.384918 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_b487594f-298c-477a-bd90-487d9f072b6e/setup-container/0.log" Feb 16 18:28:55 crc kubenswrapper[4794]: I0216 18:28:55.450550 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_ba133018-dec1-47aa-92e3-a0e3440dec49/setup-container/0.log" Feb 16 18:28:55 crc kubenswrapper[4794]: I0216 18:28:55.458660 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ba133018-dec1-47aa-92e3-a0e3440dec49/rabbitmq/0.log" Feb 16 18:28:55 crc kubenswrapper[4794]: I0216 18:28:55.629167 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_b487594f-298c-477a-bd90-487d9f072b6e/setup-container/0.log" Feb 16 18:28:55 crc kubenswrapper[4794]: I0216 18:28:55.706255 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_b487594f-298c-477a-bd90-487d9f072b6e/rabbitmq/0.log" Feb 16 18:28:55 crc kubenswrapper[4794]: I0216 18:28:55.735759 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_f02565a7-c476-4aa0-a4b4-bb7caacb4ec7/setup-container/0.log" Feb 16 18:28:55 crc kubenswrapper[4794]: I0216 18:28:55.935696 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_f02565a7-c476-4aa0-a4b4-bb7caacb4ec7/rabbitmq/0.log" Feb 16 18:28:55 crc kubenswrapper[4794]: I0216 18:28:55.948618 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_f02565a7-c476-4aa0-a4b4-bb7caacb4ec7/setup-container/0.log" Feb 16 18:28:55 crc kubenswrapper[4794]: I0216 18:28:55.956522 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-2l6mb_4679cdf0-0e90-4126-91b5-5411ea4d9452/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 18:28:56 crc kubenswrapper[4794]: I0216 18:28:56.147994 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-6tj6w_08d7a50c-a4ea-45cd-81d7-d962bc1921d5/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 18:28:56 crc kubenswrapper[4794]: I0216 18:28:56.455655 4794 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6cb67474dc-d4tmw_cd56173e-c7f0-4309-97a9-4bd89c7704f3/proxy-httpd/0.log" Feb 16 18:28:56 crc kubenswrapper[4794]: I0216 18:28:56.468585 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6cb67474dc-d4tmw_cd56173e-c7f0-4309-97a9-4bd89c7704f3/proxy-server/0.log" Feb 16 18:28:56 crc kubenswrapper[4794]: I0216 18:28:56.486936 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-w2gs8_84dc223e-f01c-424c-802a-3e1a5ad819be/swift-ring-rebalance/0.log" Feb 16 18:28:56 crc kubenswrapper[4794]: I0216 18:28:56.634545 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54acc9db-6bd7-463f-8637-6aa39ed3eb11/account-auditor/0.log" Feb 16 18:28:56 crc kubenswrapper[4794]: I0216 18:28:56.740195 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54acc9db-6bd7-463f-8637-6aa39ed3eb11/account-replicator/0.log" Feb 16 18:28:56 crc kubenswrapper[4794]: I0216 18:28:56.757437 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54acc9db-6bd7-463f-8637-6aa39ed3eb11/account-reaper/0.log" Feb 16 18:28:56 crc kubenswrapper[4794]: I0216 18:28:56.861346 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54acc9db-6bd7-463f-8637-6aa39ed3eb11/account-server/0.log" Feb 16 18:28:56 crc kubenswrapper[4794]: I0216 18:28:56.902519 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54acc9db-6bd7-463f-8637-6aa39ed3eb11/container-auditor/0.log" Feb 16 18:28:56 crc kubenswrapper[4794]: I0216 18:28:56.962209 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54acc9db-6bd7-463f-8637-6aa39ed3eb11/container-server/0.log" Feb 16 18:28:57 crc kubenswrapper[4794]: I0216 18:28:57.021114 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_54acc9db-6bd7-463f-8637-6aa39ed3eb11/container-replicator/0.log" Feb 16 18:28:57 crc kubenswrapper[4794]: I0216 18:28:57.059969 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54acc9db-6bd7-463f-8637-6aa39ed3eb11/container-updater/0.log" Feb 16 18:28:57 crc kubenswrapper[4794]: I0216 18:28:57.166801 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54acc9db-6bd7-463f-8637-6aa39ed3eb11/object-auditor/0.log" Feb 16 18:28:57 crc kubenswrapper[4794]: I0216 18:28:57.211127 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54acc9db-6bd7-463f-8637-6aa39ed3eb11/object-expirer/0.log" Feb 16 18:28:57 crc kubenswrapper[4794]: I0216 18:28:57.311590 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54acc9db-6bd7-463f-8637-6aa39ed3eb11/object-server/0.log" Feb 16 18:28:57 crc kubenswrapper[4794]: I0216 18:28:57.311891 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54acc9db-6bd7-463f-8637-6aa39ed3eb11/object-replicator/0.log" Feb 16 18:28:57 crc kubenswrapper[4794]: I0216 18:28:57.405773 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54acc9db-6bd7-463f-8637-6aa39ed3eb11/object-updater/0.log" Feb 16 18:28:57 crc kubenswrapper[4794]: I0216 18:28:57.437546 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54acc9db-6bd7-463f-8637-6aa39ed3eb11/rsync/0.log" Feb 16 18:28:57 crc kubenswrapper[4794]: I0216 18:28:57.555508 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_54acc9db-6bd7-463f-8637-6aa39ed3eb11/swift-recon-cron/0.log" Feb 16 18:29:00 crc kubenswrapper[4794]: E0216 18:29:00.793500 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:29:02 crc kubenswrapper[4794]: I0216 18:29:02.007809 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_8695c855-a285-408e-a018-ee0060a832e1/memcached/0.log" Feb 16 18:29:04 crc kubenswrapper[4794]: E0216 18:29:04.803713 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:29:15 crc kubenswrapper[4794]: E0216 18:29:15.795224 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:29:15 crc kubenswrapper[4794]: E0216 18:29:15.795323 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:29:27 crc kubenswrapper[4794]: I0216 18:29:27.693410 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr_2ff4cec4-468f-41bf-a84a-4cdbc3e236fb/util/0.log" Feb 16 18:29:27 crc kubenswrapper[4794]: I0216 18:29:27.964953 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr_2ff4cec4-468f-41bf-a84a-4cdbc3e236fb/util/0.log" Feb 16 18:29:27 crc kubenswrapper[4794]: I0216 18:29:27.995354 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr_2ff4cec4-468f-41bf-a84a-4cdbc3e236fb/pull/0.log" Feb 16 18:29:28 crc kubenswrapper[4794]: I0216 18:29:28.042197 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr_2ff4cec4-468f-41bf-a84a-4cdbc3e236fb/pull/0.log" Feb 16 18:29:28 crc kubenswrapper[4794]: I0216 18:29:28.218650 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr_2ff4cec4-468f-41bf-a84a-4cdbc3e236fb/pull/0.log" Feb 16 18:29:28 crc kubenswrapper[4794]: I0216 18:29:28.225796 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr_2ff4cec4-468f-41bf-a84a-4cdbc3e236fb/extract/0.log" Feb 16 18:29:28 crc kubenswrapper[4794]: I0216 18:29:28.248055 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98gswkr_2ff4cec4-468f-41bf-a84a-4cdbc3e236fb/util/0.log" Feb 16 18:29:28 crc kubenswrapper[4794]: I0216 18:29:28.791736 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-cq6lq_5d75b8f6-2376-48f7-90eb-de0bec6cf251/manager/0.log" Feb 16 18:29:28 crc kubenswrapper[4794]: E0216 18:29:28.793951 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:29:29 crc kubenswrapper[4794]: I0216 18:29:29.309275 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-w7smz_becddb1f-01f4-4141-a6da-86771dcf2c70/manager/0.log" Feb 16 18:29:29 crc kubenswrapper[4794]: I0216 18:29:29.642285 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-t22f4_5616fc58-e868-46a9-bad9-58cb130759de/manager/0.log" Feb 16 18:29:29 crc kubenswrapper[4794]: I0216 18:29:29.814309 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-r5hls_2f2fd1c7-b7ec-4807-a859-b1d5efb8c58e/manager/0.log" Feb 16 18:29:30 crc kubenswrapper[4794]: I0216 18:29:30.404948 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-5dghp_eaa0af70-cd40-4e75-9ddf-83a5a2190d83/manager/0.log" Feb 16 18:29:30 crc kubenswrapper[4794]: I0216 18:29:30.589057 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-45nh7_3da72c4e-1963-406a-9dff-f0bc43f154bd/manager/0.log" Feb 16 18:29:30 crc kubenswrapper[4794]: E0216 18:29:30.795522 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:29:30 crc kubenswrapper[4794]: I0216 18:29:30.981000 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-4djph_ef354ee7-16e4-4b4d-98c5-0f08fc370717/manager/0.log" Feb 16 18:29:31 crc kubenswrapper[4794]: I0216 18:29:31.118994 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-x99jf_bba3e236-f18b-4293-b517-897936db8b05/manager/0.log" Feb 16 18:29:31 crc kubenswrapper[4794]: I0216 18:29:31.315353 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-f4x45_a78b821e-c246-42b4-9576-603f0889965f/manager/0.log" Feb 16 18:29:31 crc kubenswrapper[4794]: I0216 18:29:31.418558 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-wcdnq_b428664f-1819-45d4-8040-1c0c35e31c5d/manager/0.log" Feb 16 18:29:31 crc kubenswrapper[4794]: I0216 18:29:31.621190 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-p59dn_7eaab997-2552-42b3-b638-a92220374d2d/manager/0.log" Feb 16 18:29:31 crc kubenswrapper[4794]: I0216 18:29:31.787822 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-q5f7s_44bf8e87-8212-4680-bcdc-bf1ca6d94d35/manager/0.log" Feb 16 18:29:32 crc kubenswrapper[4794]: I0216 18:29:32.047468 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9ch4rq9_b50615c5-2b75-4b07-9f72-4c70baa57bf3/manager/0.log" Feb 16 18:29:32 crc kubenswrapper[4794]: I0216 18:29:32.467364 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6f655b9d6d-cn7sg_873461df-875e-4238-89df-41d618d290bc/operator/0.log" Feb 16 18:29:33 crc kubenswrapper[4794]: I0216 18:29:33.069033 4794 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wrrdh_48aadbe0-8241-422e-a086-b1e1c0d2d9bd/registry-server/0.log" Feb 16 18:29:33 crc kubenswrapper[4794]: I0216 18:29:33.494081 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-tttfw_f4ca9db4-7b81-4c54-b6df-f5c4a8475a15/manager/0.log" Feb 16 18:29:33 crc kubenswrapper[4794]: I0216 18:29:33.753371 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-g5kbr_c566e561-8069-4311-a79f-71130f9b54d7/manager/0.log" Feb 16 18:29:33 crc kubenswrapper[4794]: I0216 18:29:33.965824 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-64c7v_89a0d9ab-217b-4bc4-ad65-6a66001fe891/operator/0.log" Feb 16 18:29:34 crc kubenswrapper[4794]: I0216 18:29:34.197510 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-b5tg6_4d912db4-c2c9-4103-ba2c-26f1dc0cc4a6/manager/0.log" Feb 16 18:29:34 crc kubenswrapper[4794]: I0216 18:29:34.966322 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6f58b764dd-9nlr2_3f66b30a-9191-494c-9d74-86e92acdc455/manager/0.log" Feb 16 18:29:34 crc kubenswrapper[4794]: I0216 18:29:34.977279 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-s79lr_f7637fa0-4e0c-41e1-a8e7-ba9442495cfc/manager/0.log" Feb 16 18:29:35 crc kubenswrapper[4794]: I0216 18:29:35.185355 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-946dc_116e8deb-7236-4751-95ee-9b839f228f55/manager/0.log" Feb 16 18:29:35 crc kubenswrapper[4794]: I0216 18:29:35.298245 4794 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5884f785c-9wnws_ba76f31a-473e-48b7-873a-a2251f664d4b/manager/0.log" Feb 16 18:29:35 crc kubenswrapper[4794]: I0216 18:29:35.372907 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-qnr9g_0ae6e41c-d0dc-4437-8b0c-1cd271cdbd6f/manager/0.log" Feb 16 18:29:39 crc kubenswrapper[4794]: E0216 18:29:39.793274 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:29:40 crc kubenswrapper[4794]: I0216 18:29:40.816648 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-j56v5_f7f924f9-9e09-4b23-91f2-7ac446f44405/manager/0.log" Feb 16 18:29:43 crc kubenswrapper[4794]: E0216 18:29:43.794133 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:29:50 crc kubenswrapper[4794]: I0216 18:29:50.140526 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 18:29:50 crc kubenswrapper[4794]: I0216 18:29:50.141016 4794 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 18:29:51 crc kubenswrapper[4794]: E0216 18:29:51.794769 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:29:56 crc kubenswrapper[4794]: E0216 18:29:56.794247 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 18:30:00.146923 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521110-hkwdn"] Feb 16 18:30:00 crc kubenswrapper[4794]: E0216 18:30:00.151584 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13d33e9-b0a0-47c4-8edc-83091aa2010d" containerName="extract-content" Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 18:30:00.151611 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13d33e9-b0a0-47c4-8edc-83091aa2010d" containerName="extract-content" Feb 16 18:30:00 crc kubenswrapper[4794]: E0216 18:30:00.151623 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13d33e9-b0a0-47c4-8edc-83091aa2010d" containerName="registry-server" Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 18:30:00.151630 4794 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e13d33e9-b0a0-47c4-8edc-83091aa2010d" containerName="registry-server" Feb 16 18:30:00 crc kubenswrapper[4794]: E0216 18:30:00.151643 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13d33e9-b0a0-47c4-8edc-83091aa2010d" containerName="extract-utilities" Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 18:30:00.151649 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13d33e9-b0a0-47c4-8edc-83091aa2010d" containerName="extract-utilities" Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 18:30:00.151890 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="e13d33e9-b0a0-47c4-8edc-83091aa2010d" containerName="registry-server" Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 18:30:00.152698 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521110-hkwdn" Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 18:30:00.159211 4794 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 18:30:00.160334 4794 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 18:30:00.173358 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521110-hkwdn"] Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 18:30:00.293110 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48j2c\" (UniqueName: \"kubernetes.io/projected/7a1210cd-f20a-4557-9dd2-a8d7b5007093-kube-api-access-48j2c\") pod \"collect-profiles-29521110-hkwdn\" (UID: \"7a1210cd-f20a-4557-9dd2-a8d7b5007093\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521110-hkwdn" Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 
18:30:00.293258 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a1210cd-f20a-4557-9dd2-a8d7b5007093-secret-volume\") pod \"collect-profiles-29521110-hkwdn\" (UID: \"7a1210cd-f20a-4557-9dd2-a8d7b5007093\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521110-hkwdn" Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 18:30:00.293317 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a1210cd-f20a-4557-9dd2-a8d7b5007093-config-volume\") pod \"collect-profiles-29521110-hkwdn\" (UID: \"7a1210cd-f20a-4557-9dd2-a8d7b5007093\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521110-hkwdn" Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 18:30:00.395347 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48j2c\" (UniqueName: \"kubernetes.io/projected/7a1210cd-f20a-4557-9dd2-a8d7b5007093-kube-api-access-48j2c\") pod \"collect-profiles-29521110-hkwdn\" (UID: \"7a1210cd-f20a-4557-9dd2-a8d7b5007093\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521110-hkwdn" Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 18:30:00.395483 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a1210cd-f20a-4557-9dd2-a8d7b5007093-secret-volume\") pod \"collect-profiles-29521110-hkwdn\" (UID: \"7a1210cd-f20a-4557-9dd2-a8d7b5007093\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521110-hkwdn" Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 18:30:00.395533 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a1210cd-f20a-4557-9dd2-a8d7b5007093-config-volume\") pod \"collect-profiles-29521110-hkwdn\" (UID: 
\"7a1210cd-f20a-4557-9dd2-a8d7b5007093\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521110-hkwdn" Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 18:30:00.396522 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a1210cd-f20a-4557-9dd2-a8d7b5007093-config-volume\") pod \"collect-profiles-29521110-hkwdn\" (UID: \"7a1210cd-f20a-4557-9dd2-a8d7b5007093\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521110-hkwdn" Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 18:30:00.400745 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a1210cd-f20a-4557-9dd2-a8d7b5007093-secret-volume\") pod \"collect-profiles-29521110-hkwdn\" (UID: \"7a1210cd-f20a-4557-9dd2-a8d7b5007093\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521110-hkwdn" Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 18:30:00.411322 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48j2c\" (UniqueName: \"kubernetes.io/projected/7a1210cd-f20a-4557-9dd2-a8d7b5007093-kube-api-access-48j2c\") pod \"collect-profiles-29521110-hkwdn\" (UID: \"7a1210cd-f20a-4557-9dd2-a8d7b5007093\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521110-hkwdn" Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 18:30:00.473377 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521110-hkwdn" Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 18:30:00.519034 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-s8r99_08547bee-d06e-467b-8be7-db65e24c7e49/control-plane-machine-set-operator/0.log" Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 18:30:00.737909 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-2rjhr_c033bce4-9921-49ec-bda6-ba7f79647c00/kube-rbac-proxy/0.log" Feb 16 18:30:00 crc kubenswrapper[4794]: I0216 18:30:00.809004 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-2rjhr_c033bce4-9921-49ec-bda6-ba7f79647c00/machine-api-operator/0.log" Feb 16 18:30:01 crc kubenswrapper[4794]: I0216 18:30:01.049409 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521110-hkwdn"] Feb 16 18:30:01 crc kubenswrapper[4794]: I0216 18:30:01.790457 4794 generic.go:334] "Generic (PLEG): container finished" podID="7a1210cd-f20a-4557-9dd2-a8d7b5007093" containerID="db3262f5053fb591b49c373343cc8cb751e4f627275b43550cdc217688472e69" exitCode=0 Feb 16 18:30:01 crc kubenswrapper[4794]: I0216 18:30:01.790563 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521110-hkwdn" event={"ID":"7a1210cd-f20a-4557-9dd2-a8d7b5007093","Type":"ContainerDied","Data":"db3262f5053fb591b49c373343cc8cb751e4f627275b43550cdc217688472e69"} Feb 16 18:30:01 crc kubenswrapper[4794]: I0216 18:30:01.790820 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521110-hkwdn" 
event={"ID":"7a1210cd-f20a-4557-9dd2-a8d7b5007093","Type":"ContainerStarted","Data":"431f21e8a95399865e1f4a09315b0ce33fd894f82039560b97961c98a9da0887"} Feb 16 18:30:03 crc kubenswrapper[4794]: I0216 18:30:03.205716 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521110-hkwdn" Feb 16 18:30:03 crc kubenswrapper[4794]: I0216 18:30:03.376336 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48j2c\" (UniqueName: \"kubernetes.io/projected/7a1210cd-f20a-4557-9dd2-a8d7b5007093-kube-api-access-48j2c\") pod \"7a1210cd-f20a-4557-9dd2-a8d7b5007093\" (UID: \"7a1210cd-f20a-4557-9dd2-a8d7b5007093\") " Feb 16 18:30:03 crc kubenswrapper[4794]: I0216 18:30:03.376426 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a1210cd-f20a-4557-9dd2-a8d7b5007093-secret-volume\") pod \"7a1210cd-f20a-4557-9dd2-a8d7b5007093\" (UID: \"7a1210cd-f20a-4557-9dd2-a8d7b5007093\") " Feb 16 18:30:03 crc kubenswrapper[4794]: I0216 18:30:03.376780 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a1210cd-f20a-4557-9dd2-a8d7b5007093-config-volume\") pod \"7a1210cd-f20a-4557-9dd2-a8d7b5007093\" (UID: \"7a1210cd-f20a-4557-9dd2-a8d7b5007093\") " Feb 16 18:30:03 crc kubenswrapper[4794]: I0216 18:30:03.378045 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a1210cd-f20a-4557-9dd2-a8d7b5007093-config-volume" (OuterVolumeSpecName: "config-volume") pod "7a1210cd-f20a-4557-9dd2-a8d7b5007093" (UID: "7a1210cd-f20a-4557-9dd2-a8d7b5007093"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 18:30:03 crc kubenswrapper[4794]: I0216 18:30:03.383486 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a1210cd-f20a-4557-9dd2-a8d7b5007093-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7a1210cd-f20a-4557-9dd2-a8d7b5007093" (UID: "7a1210cd-f20a-4557-9dd2-a8d7b5007093"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 18:30:03 crc kubenswrapper[4794]: I0216 18:30:03.385059 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a1210cd-f20a-4557-9dd2-a8d7b5007093-kube-api-access-48j2c" (OuterVolumeSpecName: "kube-api-access-48j2c") pod "7a1210cd-f20a-4557-9dd2-a8d7b5007093" (UID: "7a1210cd-f20a-4557-9dd2-a8d7b5007093"). InnerVolumeSpecName "kube-api-access-48j2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 18:30:03 crc kubenswrapper[4794]: I0216 18:30:03.479586 4794 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a1210cd-f20a-4557-9dd2-a8d7b5007093-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 18:30:03 crc kubenswrapper[4794]: I0216 18:30:03.479835 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48j2c\" (UniqueName: \"kubernetes.io/projected/7a1210cd-f20a-4557-9dd2-a8d7b5007093-kube-api-access-48j2c\") on node \"crc\" DevicePath \"\"" Feb 16 18:30:03 crc kubenswrapper[4794]: I0216 18:30:03.479846 4794 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a1210cd-f20a-4557-9dd2-a8d7b5007093-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 18:30:03 crc kubenswrapper[4794]: I0216 18:30:03.808511 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521110-hkwdn" 
event={"ID":"7a1210cd-f20a-4557-9dd2-a8d7b5007093","Type":"ContainerDied","Data":"431f21e8a95399865e1f4a09315b0ce33fd894f82039560b97961c98a9da0887"} Feb 16 18:30:03 crc kubenswrapper[4794]: I0216 18:30:03.808547 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="431f21e8a95399865e1f4a09315b0ce33fd894f82039560b97961c98a9da0887" Feb 16 18:30:03 crc kubenswrapper[4794]: I0216 18:30:03.808566 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521110-hkwdn" Feb 16 18:30:04 crc kubenswrapper[4794]: I0216 18:30:04.291213 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521065-dhm45"] Feb 16 18:30:04 crc kubenswrapper[4794]: I0216 18:30:04.304463 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521065-dhm45"] Feb 16 18:30:04 crc kubenswrapper[4794]: I0216 18:30:04.813827 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19ddd02e-dace-4ced-807f-11c9b908350c" path="/var/lib/kubelet/pods/19ddd02e-dace-4ced-807f-11c9b908350c/volumes" Feb 16 18:30:06 crc kubenswrapper[4794]: E0216 18:30:06.796807 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:30:07 crc kubenswrapper[4794]: E0216 18:30:07.796012 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" 
podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:30:15 crc kubenswrapper[4794]: I0216 18:30:15.600810 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-kr8wx_66dd19a7-a89d-4a32-9c65-8b24e4b01363/cert-manager-controller/0.log" Feb 16 18:30:15 crc kubenswrapper[4794]: I0216 18:30:15.739252 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-l8dwz_c6c81378-1dc6-496a-946f-b403a2dc0260/cert-manager-cainjector/0.log" Feb 16 18:30:15 crc kubenswrapper[4794]: I0216 18:30:15.844375 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-q75lx_9b1afc0d-a17d-4891-8e30-c1b1edf3deab/cert-manager-webhook/0.log" Feb 16 18:30:19 crc kubenswrapper[4794]: E0216 18:30:19.794055 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:30:20 crc kubenswrapper[4794]: I0216 18:30:20.140557 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 18:30:20 crc kubenswrapper[4794]: I0216 18:30:20.140800 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 18:30:20 crc kubenswrapper[4794]: E0216 18:30:20.810157 4794 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:30:32 crc kubenswrapper[4794]: I0216 18:30:32.240913 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-gcn4p_016d1c49-2466-4430-9c2d-5402c0c46fe3/nmstate-console-plugin/0.log" Feb 16 18:30:32 crc kubenswrapper[4794]: I0216 18:30:32.458893 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-9wmw5_fb8ce142-2a32-4900-9a3b-7534607c176c/kube-rbac-proxy/0.log" Feb 16 18:30:32 crc kubenswrapper[4794]: I0216 18:30:32.504591 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-v7cfc_eeb5a012-73db-4509-a3b5-35c56601ce33/nmstate-handler/0.log" Feb 16 18:30:32 crc kubenswrapper[4794]: I0216 18:30:32.540701 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-9wmw5_fb8ce142-2a32-4900-9a3b-7534607c176c/nmstate-metrics/0.log" Feb 16 18:30:32 crc kubenswrapper[4794]: I0216 18:30:32.939024 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-nzzr7_fb9092cd-fa5f-47de-9e4b-331f73e49c35/nmstate-operator/0.log" Feb 16 18:30:33 crc kubenswrapper[4794]: I0216 18:30:33.069172 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-n99t7_541b811d-ad2c-43e3-aa09-82833010ec62/nmstate-webhook/0.log" Feb 16 18:30:33 crc kubenswrapper[4794]: E0216 18:30:33.793008 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:30:34 crc kubenswrapper[4794]: E0216 18:30:34.801126 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:30:43 crc kubenswrapper[4794]: I0216 18:30:43.595635 4794 scope.go:117] "RemoveContainer" containerID="b34af399380d09782265c8f88c5d967841b6b23f394168bb5b3bcf9ab785c64d" Feb 16 18:30:47 crc kubenswrapper[4794]: E0216 18:30:47.793911 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:30:47 crc kubenswrapper[4794]: E0216 18:30:47.794822 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:30:49 crc kubenswrapper[4794]: I0216 18:30:49.617030 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-8499595899-t6s7p_1a441979-8971-4f00-9a49-0dbd7d90d537/kube-rbac-proxy/0.log" Feb 16 18:30:49 crc kubenswrapper[4794]: I0216 18:30:49.898522 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-8499595899-t6s7p_1a441979-8971-4f00-9a49-0dbd7d90d537/manager/0.log" Feb 16 18:30:50 crc kubenswrapper[4794]: I0216 18:30:50.140677 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 18:30:50 crc kubenswrapper[4794]: I0216 18:30:50.141037 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 18:30:50 crc kubenswrapper[4794]: I0216 18:30:50.141080 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 18:30:50 crc kubenswrapper[4794]: I0216 18:30:50.141977 4794 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"860677b1eb456be903bb112f204623b2baf083d1d49cf8922af98fec72c85451"} pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 18:30:50 crc kubenswrapper[4794]: I0216 18:30:50.142037 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" containerID="cri-o://860677b1eb456be903bb112f204623b2baf083d1d49cf8922af98fec72c85451" gracePeriod=600 Feb 16 18:30:50 crc kubenswrapper[4794]: I0216 18:30:50.366967 4794 
generic.go:334] "Generic (PLEG): container finished" podID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerID="860677b1eb456be903bb112f204623b2baf083d1d49cf8922af98fec72c85451" exitCode=0 Feb 16 18:30:50 crc kubenswrapper[4794]: I0216 18:30:50.367023 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerDied","Data":"860677b1eb456be903bb112f204623b2baf083d1d49cf8922af98fec72c85451"} Feb 16 18:30:50 crc kubenswrapper[4794]: I0216 18:30:50.367072 4794 scope.go:117] "RemoveContainer" containerID="62a44f9cc7dcaeb9cc0d4e21b632c5aa1a13b6b5d5c47077cb926fe9f97c5b81" Feb 16 18:30:51 crc kubenswrapper[4794]: I0216 18:30:51.379507 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032"} Feb 16 18:30:58 crc kubenswrapper[4794]: E0216 18:30:58.796471 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:31:00 crc kubenswrapper[4794]: E0216 18:31:00.794470 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:31:05 crc kubenswrapper[4794]: I0216 18:31:05.972035 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-v7bg9_d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4/prometheus-operator/0.log" Feb 16 18:31:06 crc kubenswrapper[4794]: I0216 18:31:06.180030 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf_0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b/prometheus-operator-admission-webhook/0.log" Feb 16 18:31:06 crc kubenswrapper[4794]: I0216 18:31:06.263113 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568_3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a/prometheus-operator-admission-webhook/0.log" Feb 16 18:31:06 crc kubenswrapper[4794]: I0216 18:31:06.388169 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-d85pd_bf8a1703-ef5d-4314-92ff-0a4f21d863ca/operator/0.log" Feb 16 18:31:06 crc kubenswrapper[4794]: I0216 18:31:06.492735 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-xwlnp_c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53/observability-ui-dashboards/0.log" Feb 16 18:31:06 crc kubenswrapper[4794]: I0216 18:31:06.593605 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-tq9qc_33908c91-9542-47cd-9530-dfe7b104e79e/perses-operator/0.log" Feb 16 18:31:10 crc kubenswrapper[4794]: E0216 18:31:10.794631 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:31:12 crc kubenswrapper[4794]: E0216 18:31:12.793353 4794 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:31:24 crc kubenswrapper[4794]: E0216 18:31:24.801641 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:31:25 crc kubenswrapper[4794]: I0216 18:31:25.354465 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-nscl4_33b57aff-006a-45ac-8936-d763e799be70/cluster-logging-operator/0.log" Feb 16 18:31:25 crc kubenswrapper[4794]: I0216 18:31:25.562317 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-z59t9_fcda750d-2cf9-47c5-a47a-fdc01b82e986/collector/0.log" Feb 16 18:31:25 crc kubenswrapper[4794]: I0216 18:31:25.621325 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_1972cc9c-56ea-410c-859f-e179b114fca7/loki-compactor/0.log" Feb 16 18:31:25 crc kubenswrapper[4794]: I0216 18:31:25.754724 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-zvg2f_284971a6-d034-4e31-b64b-4e842d877aed/loki-distributor/0.log" Feb 16 18:31:25 crc kubenswrapper[4794]: E0216 18:31:25.793536 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" 
podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:31:25 crc kubenswrapper[4794]: I0216 18:31:25.866944 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5db5847d75-dzs5f_032057e1-9a2f-40a9-931a-9ff902e0abeb/gateway/0.log" Feb 16 18:31:25 crc kubenswrapper[4794]: I0216 18:31:25.934402 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5db5847d75-dzs5f_032057e1-9a2f-40a9-931a-9ff902e0abeb/opa/0.log" Feb 16 18:31:25 crc kubenswrapper[4794]: I0216 18:31:25.955862 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5db5847d75-whsqk_9d2f1ecd-980b-430c-8ed1-e83406722170/gateway/0.log" Feb 16 18:31:26 crc kubenswrapper[4794]: I0216 18:31:26.291707 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_3d3b5209-1436-45d6-9131-ad623f14e8f3/loki-index-gateway/0.log" Feb 16 18:31:26 crc kubenswrapper[4794]: I0216 18:31:26.317809 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5db5847d75-whsqk_9d2f1ecd-980b-430c-8ed1-e83406722170/opa/0.log" Feb 16 18:31:26 crc kubenswrapper[4794]: I0216 18:31:26.540685 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-cm8fj_0814a3c5-3284-4e33-b3cc-4b4163bbcaa1/loki-querier/0.log" Feb 16 18:31:26 crc kubenswrapper[4794]: I0216 18:31:26.550109 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_0a80879a-09d1-4346-bfd5-9dd30ed900f7/loki-ingester/0.log" Feb 16 18:31:26 crc kubenswrapper[4794]: I0216 18:31:26.712537 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-4dmjf_5447b950-1b55-4b40-8f6f-5fde1e6fdf58/loki-query-frontend/0.log" Feb 16 18:31:38 crc kubenswrapper[4794]: E0216 18:31:38.794368 
4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:31:39 crc kubenswrapper[4794]: E0216 18:31:39.794493 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:31:45 crc kubenswrapper[4794]: I0216 18:31:45.291475 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-qmm5b_533c1ec2-44e4-4a34-8f40-5ca4dd3527db/controller/0.log" Feb 16 18:31:45 crc kubenswrapper[4794]: I0216 18:31:45.323509 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-qmm5b_533c1ec2-44e4-4a34-8f40-5ca4dd3527db/kube-rbac-proxy/0.log" Feb 16 18:31:45 crc kubenswrapper[4794]: I0216 18:31:45.516543 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjmrc_b432c0dc-a16b-408b-b760-08c20e6a6e05/cp-frr-files/0.log" Feb 16 18:31:45 crc kubenswrapper[4794]: I0216 18:31:45.711503 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjmrc_b432c0dc-a16b-408b-b760-08c20e6a6e05/cp-reloader/0.log" Feb 16 18:31:45 crc kubenswrapper[4794]: I0216 18:31:45.719732 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjmrc_b432c0dc-a16b-408b-b760-08c20e6a6e05/cp-frr-files/0.log" Feb 16 18:31:45 crc kubenswrapper[4794]: I0216 18:31:45.751594 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-sjmrc_b432c0dc-a16b-408b-b760-08c20e6a6e05/cp-reloader/0.log" Feb 16 18:31:45 crc kubenswrapper[4794]: I0216 18:31:45.768896 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjmrc_b432c0dc-a16b-408b-b760-08c20e6a6e05/cp-metrics/0.log" Feb 16 18:31:45 crc kubenswrapper[4794]: I0216 18:31:45.927880 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjmrc_b432c0dc-a16b-408b-b760-08c20e6a6e05/cp-frr-files/0.log" Feb 16 18:31:45 crc kubenswrapper[4794]: I0216 18:31:45.967763 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjmrc_b432c0dc-a16b-408b-b760-08c20e6a6e05/cp-reloader/0.log" Feb 16 18:31:45 crc kubenswrapper[4794]: I0216 18:31:45.967917 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjmrc_b432c0dc-a16b-408b-b760-08c20e6a6e05/cp-metrics/0.log" Feb 16 18:31:45 crc kubenswrapper[4794]: I0216 18:31:45.983992 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjmrc_b432c0dc-a16b-408b-b760-08c20e6a6e05/cp-metrics/0.log" Feb 16 18:31:46 crc kubenswrapper[4794]: I0216 18:31:46.210952 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjmrc_b432c0dc-a16b-408b-b760-08c20e6a6e05/cp-reloader/0.log" Feb 16 18:31:46 crc kubenswrapper[4794]: I0216 18:31:46.225900 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjmrc_b432c0dc-a16b-408b-b760-08c20e6a6e05/cp-frr-files/0.log" Feb 16 18:31:46 crc kubenswrapper[4794]: I0216 18:31:46.227162 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjmrc_b432c0dc-a16b-408b-b760-08c20e6a6e05/controller/0.log" Feb 16 18:31:46 crc kubenswrapper[4794]: I0216 18:31:46.295752 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-sjmrc_b432c0dc-a16b-408b-b760-08c20e6a6e05/cp-metrics/0.log" Feb 16 18:31:46 crc kubenswrapper[4794]: I0216 18:31:46.467638 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjmrc_b432c0dc-a16b-408b-b760-08c20e6a6e05/frr-metrics/0.log" Feb 16 18:31:46 crc kubenswrapper[4794]: I0216 18:31:46.498185 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjmrc_b432c0dc-a16b-408b-b760-08c20e6a6e05/kube-rbac-proxy/0.log" Feb 16 18:31:46 crc kubenswrapper[4794]: I0216 18:31:46.530666 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjmrc_b432c0dc-a16b-408b-b760-08c20e6a6e05/kube-rbac-proxy-frr/0.log" Feb 16 18:31:46 crc kubenswrapper[4794]: I0216 18:31:46.720599 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjmrc_b432c0dc-a16b-408b-b760-08c20e6a6e05/reloader/0.log" Feb 16 18:31:46 crc kubenswrapper[4794]: I0216 18:31:46.798856 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-pv9br_06b0fb65-95c5-4a34-ae4e-d787cf10733c/frr-k8s-webhook-server/0.log" Feb 16 18:31:47 crc kubenswrapper[4794]: I0216 18:31:47.050935 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7cfd877d99-ln65b_336e2f2e-feed-48c4-8ef5-26630fbf649b/manager/0.log" Feb 16 18:31:47 crc kubenswrapper[4794]: I0216 18:31:47.168837 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6c9857685-shg96_e9e1f0f5-927b-4cc7-94c3-130c0a320750/webhook-server/0.log" Feb 16 18:31:47 crc kubenswrapper[4794]: I0216 18:31:47.375126 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pkjkp_0863f5e7-b46f-45a6-866e-a445bddeeed2/kube-rbac-proxy/0.log" Feb 16 18:31:47 crc kubenswrapper[4794]: I0216 18:31:47.987737 4794 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pkjkp_0863f5e7-b46f-45a6-866e-a445bddeeed2/speaker/0.log" Feb 16 18:31:48 crc kubenswrapper[4794]: I0216 18:31:48.042741 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjmrc_b432c0dc-a16b-408b-b760-08c20e6a6e05/frr/0.log" Feb 16 18:31:50 crc kubenswrapper[4794]: E0216 18:31:50.796911 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:31:52 crc kubenswrapper[4794]: E0216 18:31:52.794421 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:31:59 crc kubenswrapper[4794]: I0216 18:31:59.286568 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6chx6"] Feb 16 18:31:59 crc kubenswrapper[4794]: E0216 18:31:59.287462 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a1210cd-f20a-4557-9dd2-a8d7b5007093" containerName="collect-profiles" Feb 16 18:31:59 crc kubenswrapper[4794]: I0216 18:31:59.287478 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a1210cd-f20a-4557-9dd2-a8d7b5007093" containerName="collect-profiles" Feb 16 18:31:59 crc kubenswrapper[4794]: I0216 18:31:59.287717 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a1210cd-f20a-4557-9dd2-a8d7b5007093" containerName="collect-profiles" Feb 16 18:31:59 crc kubenswrapper[4794]: I0216 18:31:59.289247 4794 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6chx6" Feb 16 18:31:59 crc kubenswrapper[4794]: I0216 18:31:59.309015 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6chx6"] Feb 16 18:31:59 crc kubenswrapper[4794]: I0216 18:31:59.398118 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a5e60a-6d42-4965-98d8-bfd752a92270-utilities\") pod \"redhat-operators-6chx6\" (UID: \"78a5e60a-6d42-4965-98d8-bfd752a92270\") " pod="openshift-marketplace/redhat-operators-6chx6" Feb 16 18:31:59 crc kubenswrapper[4794]: I0216 18:31:59.398212 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a5e60a-6d42-4965-98d8-bfd752a92270-catalog-content\") pod \"redhat-operators-6chx6\" (UID: \"78a5e60a-6d42-4965-98d8-bfd752a92270\") " pod="openshift-marketplace/redhat-operators-6chx6" Feb 16 18:31:59 crc kubenswrapper[4794]: I0216 18:31:59.398530 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz2wj\" (UniqueName: \"kubernetes.io/projected/78a5e60a-6d42-4965-98d8-bfd752a92270-kube-api-access-sz2wj\") pod \"redhat-operators-6chx6\" (UID: \"78a5e60a-6d42-4965-98d8-bfd752a92270\") " pod="openshift-marketplace/redhat-operators-6chx6" Feb 16 18:31:59 crc kubenswrapper[4794]: I0216 18:31:59.500568 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a5e60a-6d42-4965-98d8-bfd752a92270-utilities\") pod \"redhat-operators-6chx6\" (UID: \"78a5e60a-6d42-4965-98d8-bfd752a92270\") " pod="openshift-marketplace/redhat-operators-6chx6" Feb 16 18:31:59 crc kubenswrapper[4794]: I0216 18:31:59.500699 4794 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a5e60a-6d42-4965-98d8-bfd752a92270-catalog-content\") pod \"redhat-operators-6chx6\" (UID: \"78a5e60a-6d42-4965-98d8-bfd752a92270\") " pod="openshift-marketplace/redhat-operators-6chx6" Feb 16 18:31:59 crc kubenswrapper[4794]: I0216 18:31:59.500807 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz2wj\" (UniqueName: \"kubernetes.io/projected/78a5e60a-6d42-4965-98d8-bfd752a92270-kube-api-access-sz2wj\") pod \"redhat-operators-6chx6\" (UID: \"78a5e60a-6d42-4965-98d8-bfd752a92270\") " pod="openshift-marketplace/redhat-operators-6chx6" Feb 16 18:31:59 crc kubenswrapper[4794]: I0216 18:31:59.501180 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a5e60a-6d42-4965-98d8-bfd752a92270-utilities\") pod \"redhat-operators-6chx6\" (UID: \"78a5e60a-6d42-4965-98d8-bfd752a92270\") " pod="openshift-marketplace/redhat-operators-6chx6" Feb 16 18:31:59 crc kubenswrapper[4794]: I0216 18:31:59.501522 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a5e60a-6d42-4965-98d8-bfd752a92270-catalog-content\") pod \"redhat-operators-6chx6\" (UID: \"78a5e60a-6d42-4965-98d8-bfd752a92270\") " pod="openshift-marketplace/redhat-operators-6chx6" Feb 16 18:31:59 crc kubenswrapper[4794]: I0216 18:31:59.530254 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz2wj\" (UniqueName: \"kubernetes.io/projected/78a5e60a-6d42-4965-98d8-bfd752a92270-kube-api-access-sz2wj\") pod \"redhat-operators-6chx6\" (UID: \"78a5e60a-6d42-4965-98d8-bfd752a92270\") " pod="openshift-marketplace/redhat-operators-6chx6" Feb 16 18:31:59 crc kubenswrapper[4794]: I0216 18:31:59.610227 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6chx6" Feb 16 18:32:00 crc kubenswrapper[4794]: I0216 18:32:00.171768 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6chx6"] Feb 16 18:32:01 crc kubenswrapper[4794]: I0216 18:32:01.182890 4794 generic.go:334] "Generic (PLEG): container finished" podID="78a5e60a-6d42-4965-98d8-bfd752a92270" containerID="c8fb71761427052cc9ee6ca9ee6455933bed6c6ed0180fd8763f5c5035b6dc3e" exitCode=0 Feb 16 18:32:01 crc kubenswrapper[4794]: I0216 18:32:01.182936 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6chx6" event={"ID":"78a5e60a-6d42-4965-98d8-bfd752a92270","Type":"ContainerDied","Data":"c8fb71761427052cc9ee6ca9ee6455933bed6c6ed0180fd8763f5c5035b6dc3e"} Feb 16 18:32:01 crc kubenswrapper[4794]: I0216 18:32:01.183600 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6chx6" event={"ID":"78a5e60a-6d42-4965-98d8-bfd752a92270","Type":"ContainerStarted","Data":"70058f2730c17f0922a124bbf8dcd8a1eb707fecf9ae6b42d7a2055d0962c8d6"} Feb 16 18:32:02 crc kubenswrapper[4794]: I0216 18:32:02.199427 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6chx6" event={"ID":"78a5e60a-6d42-4965-98d8-bfd752a92270","Type":"ContainerStarted","Data":"b3131ee3ce87012811be51d82f8880efb800bef2073a042840252475d48f62c2"} Feb 16 18:32:04 crc kubenswrapper[4794]: I0216 18:32:04.078156 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l_eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa/util/0.log" Feb 16 18:32:04 crc kubenswrapper[4794]: I0216 18:32:04.368147 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l_eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa/pull/0.log" Feb 16 
18:32:04 crc kubenswrapper[4794]: I0216 18:32:04.374471 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l_eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa/util/0.log" Feb 16 18:32:04 crc kubenswrapper[4794]: I0216 18:32:04.415320 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l_eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa/pull/0.log" Feb 16 18:32:04 crc kubenswrapper[4794]: I0216 18:32:04.556045 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l_eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa/pull/0.log" Feb 16 18:32:04 crc kubenswrapper[4794]: I0216 18:32:04.580669 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l_eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa/extract/0.log" Feb 16 18:32:04 crc kubenswrapper[4794]: I0216 18:32:04.603814 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jnn9l_eb9f74b1-cfb9-43bd-981b-106ab4e9f0fa/util/0.log" Feb 16 18:32:04 crc kubenswrapper[4794]: I0216 18:32:04.737870 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5_476791fd-4f52-4366-87cd-1d1154726fa8/util/0.log" Feb 16 18:32:04 crc kubenswrapper[4794]: E0216 18:32:04.801560 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:32:04 crc 
kubenswrapper[4794]: I0216 18:32:04.963069 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5_476791fd-4f52-4366-87cd-1d1154726fa8/pull/0.log" Feb 16 18:32:04 crc kubenswrapper[4794]: I0216 18:32:04.991314 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5_476791fd-4f52-4366-87cd-1d1154726fa8/pull/0.log" Feb 16 18:32:05 crc kubenswrapper[4794]: I0216 18:32:05.015247 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5_476791fd-4f52-4366-87cd-1d1154726fa8/util/0.log" Feb 16 18:32:05 crc kubenswrapper[4794]: I0216 18:32:05.349903 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5_476791fd-4f52-4366-87cd-1d1154726fa8/util/0.log" Feb 16 18:32:05 crc kubenswrapper[4794]: I0216 18:32:05.420053 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5_476791fd-4f52-4366-87cd-1d1154726fa8/pull/0.log" Feb 16 18:32:05 crc kubenswrapper[4794]: I0216 18:32:05.456600 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qj8r5_476791fd-4f52-4366-87cd-1d1154726fa8/extract/0.log" Feb 16 18:32:05 crc kubenswrapper[4794]: I0216 18:32:05.579606 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t_1bd5ce2a-a814-4cae-bd5e-21ef1564d186/util/0.log" Feb 16 18:32:05 crc kubenswrapper[4794]: I0216 18:32:05.802255 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t_1bd5ce2a-a814-4cae-bd5e-21ef1564d186/pull/0.log" Feb 16 18:32:05 crc kubenswrapper[4794]: I0216 18:32:05.836521 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t_1bd5ce2a-a814-4cae-bd5e-21ef1564d186/pull/0.log" Feb 16 18:32:05 crc kubenswrapper[4794]: I0216 18:32:05.838275 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t_1bd5ce2a-a814-4cae-bd5e-21ef1564d186/util/0.log" Feb 16 18:32:05 crc kubenswrapper[4794]: I0216 18:32:05.991736 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t_1bd5ce2a-a814-4cae-bd5e-21ef1564d186/util/0.log" Feb 16 18:32:06 crc kubenswrapper[4794]: I0216 18:32:06.073531 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t_1bd5ce2a-a814-4cae-bd5e-21ef1564d186/pull/0.log" Feb 16 18:32:06 crc kubenswrapper[4794]: I0216 18:32:06.127037 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213grn8t_1bd5ce2a-a814-4cae-bd5e-21ef1564d186/extract/0.log" Feb 16 18:32:06 crc kubenswrapper[4794]: I0216 18:32:06.186978 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v6fpv_5e261c1f-73e1-4df0-8b70-82134d90a4a5/extract-utilities/0.log" Feb 16 18:32:06 crc kubenswrapper[4794]: I0216 18:32:06.394144 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v6fpv_5e261c1f-73e1-4df0-8b70-82134d90a4a5/extract-utilities/0.log" Feb 16 18:32:06 crc kubenswrapper[4794]: I0216 
18:32:06.466946 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v6fpv_5e261c1f-73e1-4df0-8b70-82134d90a4a5/extract-content/0.log" Feb 16 18:32:06 crc kubenswrapper[4794]: I0216 18:32:06.467042 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v6fpv_5e261c1f-73e1-4df0-8b70-82134d90a4a5/extract-content/0.log" Feb 16 18:32:06 crc kubenswrapper[4794]: I0216 18:32:06.589403 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v6fpv_5e261c1f-73e1-4df0-8b70-82134d90a4a5/extract-content/0.log" Feb 16 18:32:06 crc kubenswrapper[4794]: I0216 18:32:06.590849 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v6fpv_5e261c1f-73e1-4df0-8b70-82134d90a4a5/extract-utilities/0.log" Feb 16 18:32:06 crc kubenswrapper[4794]: I0216 18:32:06.820843 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dwwrn_d7f1aaab-f576-46e7-8dde-d4cf89e2ff10/extract-utilities/0.log" Feb 16 18:32:06 crc kubenswrapper[4794]: I0216 18:32:06.990705 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dwwrn_d7f1aaab-f576-46e7-8dde-d4cf89e2ff10/extract-content/0.log" Feb 16 18:32:07 crc kubenswrapper[4794]: I0216 18:32:07.020854 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dwwrn_d7f1aaab-f576-46e7-8dde-d4cf89e2ff10/extract-utilities/0.log" Feb 16 18:32:07 crc kubenswrapper[4794]: I0216 18:32:07.085590 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dwwrn_d7f1aaab-f576-46e7-8dde-d4cf89e2ff10/extract-content/0.log" Feb 16 18:32:07 crc kubenswrapper[4794]: I0216 18:32:07.197405 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-v6fpv_5e261c1f-73e1-4df0-8b70-82134d90a4a5/registry-server/0.log" Feb 16 18:32:07 crc kubenswrapper[4794]: I0216 18:32:07.249130 4794 generic.go:334] "Generic (PLEG): container finished" podID="78a5e60a-6d42-4965-98d8-bfd752a92270" containerID="b3131ee3ce87012811be51d82f8880efb800bef2073a042840252475d48f62c2" exitCode=0 Feb 16 18:32:07 crc kubenswrapper[4794]: I0216 18:32:07.249170 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6chx6" event={"ID":"78a5e60a-6d42-4965-98d8-bfd752a92270","Type":"ContainerDied","Data":"b3131ee3ce87012811be51d82f8880efb800bef2073a042840252475d48f62c2"} Feb 16 18:32:07 crc kubenswrapper[4794]: I0216 18:32:07.286906 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dwwrn_d7f1aaab-f576-46e7-8dde-d4cf89e2ff10/extract-utilities/0.log" Feb 16 18:32:07 crc kubenswrapper[4794]: I0216 18:32:07.376211 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dwwrn_d7f1aaab-f576-46e7-8dde-d4cf89e2ff10/extract-content/0.log" Feb 16 18:32:07 crc kubenswrapper[4794]: I0216 18:32:07.515193 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd_c548d720-7bad-47af-badb-d01ab54e8afd/util/0.log" Feb 16 18:32:07 crc kubenswrapper[4794]: E0216 18:32:07.812267 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:32:07 crc kubenswrapper[4794]: I0216 18:32:07.812540 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd_c548d720-7bad-47af-badb-d01ab54e8afd/util/0.log" Feb 16 18:32:07 crc kubenswrapper[4794]: I0216 18:32:07.821095 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd_c548d720-7bad-47af-badb-d01ab54e8afd/pull/0.log" Feb 16 18:32:07 crc kubenswrapper[4794]: I0216 18:32:07.911656 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd_c548d720-7bad-47af-badb-d01ab54e8afd/pull/0.log" Feb 16 18:32:08 crc kubenswrapper[4794]: I0216 18:32:08.029793 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd_c548d720-7bad-47af-badb-d01ab54e8afd/util/0.log" Feb 16 18:32:08 crc kubenswrapper[4794]: I0216 18:32:08.129525 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd_c548d720-7bad-47af-badb-d01ab54e8afd/pull/0.log" Feb 16 18:32:08 crc kubenswrapper[4794]: I0216 18:32:08.130461 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e0898985jdd_c548d720-7bad-47af-badb-d01ab54e8afd/extract/0.log" Feb 16 18:32:08 crc kubenswrapper[4794]: I0216 18:32:08.189590 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dwwrn_d7f1aaab-f576-46e7-8dde-d4cf89e2ff10/registry-server/0.log" Feb 16 18:32:08 crc kubenswrapper[4794]: I0216 18:32:08.262677 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6chx6" 
event={"ID":"78a5e60a-6d42-4965-98d8-bfd752a92270","Type":"ContainerStarted","Data":"7a88d568a80cf612a2e3c3890b318aeefa5de72e757fa6139e36a70c83474302"} Feb 16 18:32:08 crc kubenswrapper[4794]: I0216 18:32:08.286957 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6chx6" podStartSLOduration=2.750164768 podStartE2EDuration="9.286933581s" podCreationTimestamp="2026-02-16 18:31:59 +0000 UTC" firstStartedPulling="2026-02-16 18:32:01.185042644 +0000 UTC m=+5547.133137291" lastFinishedPulling="2026-02-16 18:32:07.721811457 +0000 UTC m=+5553.669906104" observedRunningTime="2026-02-16 18:32:08.278689008 +0000 UTC m=+5554.226783655" watchObservedRunningTime="2026-02-16 18:32:08.286933581 +0000 UTC m=+5554.235028228" Feb 16 18:32:08 crc kubenswrapper[4794]: I0216 18:32:08.327795 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm_4782dec2-0df6-498a-908f-ba56f68b462f/util/0.log" Feb 16 18:32:08 crc kubenswrapper[4794]: I0216 18:32:08.510028 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm_4782dec2-0df6-498a-908f-ba56f68b462f/pull/0.log" Feb 16 18:32:08 crc kubenswrapper[4794]: I0216 18:32:08.550017 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm_4782dec2-0df6-498a-908f-ba56f68b462f/pull/0.log" Feb 16 18:32:08 crc kubenswrapper[4794]: I0216 18:32:08.568065 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm_4782dec2-0df6-498a-908f-ba56f68b462f/util/0.log" Feb 16 18:32:08 crc kubenswrapper[4794]: I0216 18:32:08.793632 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm_4782dec2-0df6-498a-908f-ba56f68b462f/util/0.log" Feb 16 18:32:08 crc kubenswrapper[4794]: I0216 18:32:08.811572 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm_4782dec2-0df6-498a-908f-ba56f68b462f/pull/0.log" Feb 16 18:32:08 crc kubenswrapper[4794]: I0216 18:32:08.837182 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecafn9zm_4782dec2-0df6-498a-908f-ba56f68b462f/extract/0.log" Feb 16 18:32:08 crc kubenswrapper[4794]: I0216 18:32:08.877819 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8hqkn_7dbed710-cd99-4571-8aca-92145b798f65/marketplace-operator/0.log" Feb 16 18:32:09 crc kubenswrapper[4794]: I0216 18:32:09.520584 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q8h66_984a7bce-d46b-4339-bd79-7fce25092b99/extract-utilities/0.log" Feb 16 18:32:09 crc kubenswrapper[4794]: I0216 18:32:09.611090 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6chx6" Feb 16 18:32:09 crc kubenswrapper[4794]: I0216 18:32:09.611141 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6chx6" Feb 16 18:32:09 crc kubenswrapper[4794]: I0216 18:32:09.716790 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q8h66_984a7bce-d46b-4339-bd79-7fce25092b99/extract-content/0.log" Feb 16 18:32:09 crc kubenswrapper[4794]: I0216 18:32:09.718777 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-q8h66_984a7bce-d46b-4339-bd79-7fce25092b99/extract-content/0.log" Feb 16 18:32:09 crc kubenswrapper[4794]: I0216 18:32:09.768691 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q8h66_984a7bce-d46b-4339-bd79-7fce25092b99/extract-utilities/0.log" Feb 16 18:32:09 crc kubenswrapper[4794]: I0216 18:32:09.977522 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q8h66_984a7bce-d46b-4339-bd79-7fce25092b99/extract-utilities/0.log" Feb 16 18:32:09 crc kubenswrapper[4794]: I0216 18:32:09.999628 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q8h66_984a7bce-d46b-4339-bd79-7fce25092b99/registry-server/0.log" Feb 16 18:32:10 crc kubenswrapper[4794]: I0216 18:32:10.289154 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q8h66_984a7bce-d46b-4339-bd79-7fce25092b99/extract-content/0.log" Feb 16 18:32:10 crc kubenswrapper[4794]: I0216 18:32:10.344529 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6chx6_78a5e60a-6d42-4965-98d8-bfd752a92270/extract-utilities/0.log" Feb 16 18:32:10 crc kubenswrapper[4794]: I0216 18:32:10.410745 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6chx6_78a5e60a-6d42-4965-98d8-bfd752a92270/extract-content/0.log" Feb 16 18:32:10 crc kubenswrapper[4794]: I0216 18:32:10.448599 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6chx6_78a5e60a-6d42-4965-98d8-bfd752a92270/extract-utilities/0.log" Feb 16 18:32:10 crc kubenswrapper[4794]: I0216 18:32:10.524947 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6chx6_78a5e60a-6d42-4965-98d8-bfd752a92270/extract-content/0.log" 
Feb 16 18:32:10 crc kubenswrapper[4794]: I0216 18:32:10.660360 4794 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6chx6" podUID="78a5e60a-6d42-4965-98d8-bfd752a92270" containerName="registry-server" probeResult="failure" output=< Feb 16 18:32:10 crc kubenswrapper[4794]: timeout: failed to connect service ":50051" within 1s Feb 16 18:32:10 crc kubenswrapper[4794]: > Feb 16 18:32:10 crc kubenswrapper[4794]: I0216 18:32:10.713687 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6chx6_78a5e60a-6d42-4965-98d8-bfd752a92270/extract-utilities/0.log" Feb 16 18:32:10 crc kubenswrapper[4794]: I0216 18:32:10.753534 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6chx6_78a5e60a-6d42-4965-98d8-bfd752a92270/extract-content/0.log" Feb 16 18:32:10 crc kubenswrapper[4794]: I0216 18:32:10.759840 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6chx6_78a5e60a-6d42-4965-98d8-bfd752a92270/registry-server/0.log" Feb 16 18:32:10 crc kubenswrapper[4794]: I0216 18:32:10.815684 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zw7gt_b3d4ba8e-df36-4a0b-8ea2-014e4f94993d/extract-utilities/0.log" Feb 16 18:32:11 crc kubenswrapper[4794]: I0216 18:32:11.788507 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zw7gt_b3d4ba8e-df36-4a0b-8ea2-014e4f94993d/extract-content/0.log" Feb 16 18:32:11 crc kubenswrapper[4794]: I0216 18:32:11.838610 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zw7gt_b3d4ba8e-df36-4a0b-8ea2-014e4f94993d/extract-utilities/0.log" Feb 16 18:32:11 crc kubenswrapper[4794]: I0216 18:32:11.976286 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-zw7gt_b3d4ba8e-df36-4a0b-8ea2-014e4f94993d/extract-content/0.log" Feb 16 18:32:12 crc kubenswrapper[4794]: I0216 18:32:12.123895 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zw7gt_b3d4ba8e-df36-4a0b-8ea2-014e4f94993d/extract-content/0.log" Feb 16 18:32:12 crc kubenswrapper[4794]: I0216 18:32:12.130967 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zw7gt_b3d4ba8e-df36-4a0b-8ea2-014e4f94993d/extract-utilities/0.log" Feb 16 18:32:12 crc kubenswrapper[4794]: I0216 18:32:12.818971 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zw7gt_b3d4ba8e-df36-4a0b-8ea2-014e4f94993d/registry-server/0.log" Feb 16 18:32:16 crc kubenswrapper[4794]: I0216 18:32:16.222380 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n7fsz"] Feb 16 18:32:16 crc kubenswrapper[4794]: I0216 18:32:16.226969 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n7fsz" Feb 16 18:32:16 crc kubenswrapper[4794]: I0216 18:32:16.238885 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n7fsz"] Feb 16 18:32:16 crc kubenswrapper[4794]: I0216 18:32:16.357563 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmdbp\" (UniqueName: \"kubernetes.io/projected/7c5cf25a-1f1a-4ae5-9cef-4793b1d674df-kube-api-access-cmdbp\") pod \"community-operators-n7fsz\" (UID: \"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df\") " pod="openshift-marketplace/community-operators-n7fsz" Feb 16 18:32:16 crc kubenswrapper[4794]: I0216 18:32:16.358216 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c5cf25a-1f1a-4ae5-9cef-4793b1d674df-catalog-content\") pod \"community-operators-n7fsz\" (UID: \"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df\") " pod="openshift-marketplace/community-operators-n7fsz" Feb 16 18:32:16 crc kubenswrapper[4794]: I0216 18:32:16.358483 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c5cf25a-1f1a-4ae5-9cef-4793b1d674df-utilities\") pod \"community-operators-n7fsz\" (UID: \"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df\") " pod="openshift-marketplace/community-operators-n7fsz" Feb 16 18:32:16 crc kubenswrapper[4794]: I0216 18:32:16.460858 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmdbp\" (UniqueName: \"kubernetes.io/projected/7c5cf25a-1f1a-4ae5-9cef-4793b1d674df-kube-api-access-cmdbp\") pod \"community-operators-n7fsz\" (UID: \"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df\") " pod="openshift-marketplace/community-operators-n7fsz" Feb 16 18:32:16 crc kubenswrapper[4794]: I0216 18:32:16.461058 4794 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c5cf25a-1f1a-4ae5-9cef-4793b1d674df-catalog-content\") pod \"community-operators-n7fsz\" (UID: \"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df\") " pod="openshift-marketplace/community-operators-n7fsz" Feb 16 18:32:16 crc kubenswrapper[4794]: I0216 18:32:16.461218 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c5cf25a-1f1a-4ae5-9cef-4793b1d674df-utilities\") pod \"community-operators-n7fsz\" (UID: \"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df\") " pod="openshift-marketplace/community-operators-n7fsz" Feb 16 18:32:16 crc kubenswrapper[4794]: I0216 18:32:16.461694 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c5cf25a-1f1a-4ae5-9cef-4793b1d674df-catalog-content\") pod \"community-operators-n7fsz\" (UID: \"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df\") " pod="openshift-marketplace/community-operators-n7fsz" Feb 16 18:32:16 crc kubenswrapper[4794]: I0216 18:32:16.461703 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c5cf25a-1f1a-4ae5-9cef-4793b1d674df-utilities\") pod \"community-operators-n7fsz\" (UID: \"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df\") " pod="openshift-marketplace/community-operators-n7fsz" Feb 16 18:32:16 crc kubenswrapper[4794]: I0216 18:32:16.481496 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmdbp\" (UniqueName: \"kubernetes.io/projected/7c5cf25a-1f1a-4ae5-9cef-4793b1d674df-kube-api-access-cmdbp\") pod \"community-operators-n7fsz\" (UID: \"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df\") " pod="openshift-marketplace/community-operators-n7fsz" Feb 16 18:32:16 crc kubenswrapper[4794]: I0216 18:32:16.581590 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n7fsz" Feb 16 18:32:17 crc kubenswrapper[4794]: I0216 18:32:17.072889 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n7fsz"] Feb 16 18:32:17 crc kubenswrapper[4794]: I0216 18:32:17.360268 4794 generic.go:334] "Generic (PLEG): container finished" podID="7c5cf25a-1f1a-4ae5-9cef-4793b1d674df" containerID="2fb55b592b5e07ea836fe3a97b5ae350b002dd61145e6005d89e8fb36d882812" exitCode=0 Feb 16 18:32:17 crc kubenswrapper[4794]: I0216 18:32:17.360465 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7fsz" event={"ID":"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df","Type":"ContainerDied","Data":"2fb55b592b5e07ea836fe3a97b5ae350b002dd61145e6005d89e8fb36d882812"} Feb 16 18:32:17 crc kubenswrapper[4794]: I0216 18:32:17.361467 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7fsz" event={"ID":"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df","Type":"ContainerStarted","Data":"253e07a7349d5246ce9d80d2df4fb5f475b4154e2efbbd0c4ef856db6e1a4447"} Feb 16 18:32:18 crc kubenswrapper[4794]: I0216 18:32:18.375290 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7fsz" event={"ID":"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df","Type":"ContainerStarted","Data":"490dadb0a080dc725f3446707ffe6b8748d3078e9563b0848190c9e167f3f4ae"} Feb 16 18:32:19 crc kubenswrapper[4794]: E0216 18:32:19.794628 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:32:19 crc kubenswrapper[4794]: E0216 18:32:19.795024 4794 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:32:20 crc kubenswrapper[4794]: I0216 18:32:20.658321 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6chx6" Feb 16 18:32:20 crc kubenswrapper[4794]: I0216 18:32:20.727884 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6chx6" Feb 16 18:32:21 crc kubenswrapper[4794]: I0216 18:32:21.414738 4794 generic.go:334] "Generic (PLEG): container finished" podID="7c5cf25a-1f1a-4ae5-9cef-4793b1d674df" containerID="490dadb0a080dc725f3446707ffe6b8748d3078e9563b0848190c9e167f3f4ae" exitCode=0 Feb 16 18:32:21 crc kubenswrapper[4794]: I0216 18:32:21.415476 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7fsz" event={"ID":"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df","Type":"ContainerDied","Data":"490dadb0a080dc725f3446707ffe6b8748d3078e9563b0848190c9e167f3f4ae"} Feb 16 18:32:21 crc kubenswrapper[4794]: I0216 18:32:21.588897 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6chx6"] Feb 16 18:32:22 crc kubenswrapper[4794]: I0216 18:32:22.429243 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7fsz" event={"ID":"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df","Type":"ContainerStarted","Data":"ed05a4a12fed4a1f4d291a0a5cf643ccce48fbfc89bf48bf016f1fa1168d3d34"} Feb 16 18:32:22 crc kubenswrapper[4794]: I0216 18:32:22.429476 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6chx6" podUID="78a5e60a-6d42-4965-98d8-bfd752a92270" 
containerName="registry-server" containerID="cri-o://7a88d568a80cf612a2e3c3890b318aeefa5de72e757fa6139e36a70c83474302" gracePeriod=2 Feb 16 18:32:22 crc kubenswrapper[4794]: I0216 18:32:22.462734 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n7fsz" podStartSLOduration=1.962520065 podStartE2EDuration="6.462713724s" podCreationTimestamp="2026-02-16 18:32:16 +0000 UTC" firstStartedPulling="2026-02-16 18:32:17.362158994 +0000 UTC m=+5563.310253651" lastFinishedPulling="2026-02-16 18:32:21.862352653 +0000 UTC m=+5567.810447310" observedRunningTime="2026-02-16 18:32:22.453349329 +0000 UTC m=+5568.401443976" watchObservedRunningTime="2026-02-16 18:32:22.462713724 +0000 UTC m=+5568.410808371" Feb 16 18:32:23 crc kubenswrapper[4794]: I0216 18:32:23.449924 4794 generic.go:334] "Generic (PLEG): container finished" podID="78a5e60a-6d42-4965-98d8-bfd752a92270" containerID="7a88d568a80cf612a2e3c3890b318aeefa5de72e757fa6139e36a70c83474302" exitCode=0 Feb 16 18:32:23 crc kubenswrapper[4794]: I0216 18:32:23.450088 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6chx6" event={"ID":"78a5e60a-6d42-4965-98d8-bfd752a92270","Type":"ContainerDied","Data":"7a88d568a80cf612a2e3c3890b318aeefa5de72e757fa6139e36a70c83474302"} Feb 16 18:32:23 crc kubenswrapper[4794]: I0216 18:32:23.450617 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6chx6" event={"ID":"78a5e60a-6d42-4965-98d8-bfd752a92270","Type":"ContainerDied","Data":"70058f2730c17f0922a124bbf8dcd8a1eb707fecf9ae6b42d7a2055d0962c8d6"} Feb 16 18:32:23 crc kubenswrapper[4794]: I0216 18:32:23.450636 4794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70058f2730c17f0922a124bbf8dcd8a1eb707fecf9ae6b42d7a2055d0962c8d6" Feb 16 18:32:23 crc kubenswrapper[4794]: I0216 18:32:23.534759 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6chx6" Feb 16 18:32:23 crc kubenswrapper[4794]: I0216 18:32:23.550504 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz2wj\" (UniqueName: \"kubernetes.io/projected/78a5e60a-6d42-4965-98d8-bfd752a92270-kube-api-access-sz2wj\") pod \"78a5e60a-6d42-4965-98d8-bfd752a92270\" (UID: \"78a5e60a-6d42-4965-98d8-bfd752a92270\") " Feb 16 18:32:23 crc kubenswrapper[4794]: I0216 18:32:23.550742 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a5e60a-6d42-4965-98d8-bfd752a92270-utilities\") pod \"78a5e60a-6d42-4965-98d8-bfd752a92270\" (UID: \"78a5e60a-6d42-4965-98d8-bfd752a92270\") " Feb 16 18:32:23 crc kubenswrapper[4794]: I0216 18:32:23.550768 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a5e60a-6d42-4965-98d8-bfd752a92270-catalog-content\") pod \"78a5e60a-6d42-4965-98d8-bfd752a92270\" (UID: \"78a5e60a-6d42-4965-98d8-bfd752a92270\") " Feb 16 18:32:23 crc kubenswrapper[4794]: I0216 18:32:23.551977 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78a5e60a-6d42-4965-98d8-bfd752a92270-utilities" (OuterVolumeSpecName: "utilities") pod "78a5e60a-6d42-4965-98d8-bfd752a92270" (UID: "78a5e60a-6d42-4965-98d8-bfd752a92270"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:32:23 crc kubenswrapper[4794]: I0216 18:32:23.609031 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78a5e60a-6d42-4965-98d8-bfd752a92270-kube-api-access-sz2wj" (OuterVolumeSpecName: "kube-api-access-sz2wj") pod "78a5e60a-6d42-4965-98d8-bfd752a92270" (UID: "78a5e60a-6d42-4965-98d8-bfd752a92270"). InnerVolumeSpecName "kube-api-access-sz2wj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 18:32:23 crc kubenswrapper[4794]: I0216 18:32:23.653596 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz2wj\" (UniqueName: \"kubernetes.io/projected/78a5e60a-6d42-4965-98d8-bfd752a92270-kube-api-access-sz2wj\") on node \"crc\" DevicePath \"\"" Feb 16 18:32:23 crc kubenswrapper[4794]: I0216 18:32:23.653817 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a5e60a-6d42-4965-98d8-bfd752a92270-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 18:32:23 crc kubenswrapper[4794]: I0216 18:32:23.747969 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78a5e60a-6d42-4965-98d8-bfd752a92270-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78a5e60a-6d42-4965-98d8-bfd752a92270" (UID: "78a5e60a-6d42-4965-98d8-bfd752a92270"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:32:23 crc kubenswrapper[4794]: I0216 18:32:23.756248 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a5e60a-6d42-4965-98d8-bfd752a92270-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 18:32:24 crc kubenswrapper[4794]: I0216 18:32:24.460922 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6chx6" Feb 16 18:32:24 crc kubenswrapper[4794]: I0216 18:32:24.501069 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6chx6"] Feb 16 18:32:24 crc kubenswrapper[4794]: I0216 18:32:24.510559 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6chx6"] Feb 16 18:32:24 crc kubenswrapper[4794]: I0216 18:32:24.804100 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78a5e60a-6d42-4965-98d8-bfd752a92270" path="/var/lib/kubelet/pods/78a5e60a-6d42-4965-98d8-bfd752a92270/volumes" Feb 16 18:32:26 crc kubenswrapper[4794]: I0216 18:32:26.581783 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n7fsz" Feb 16 18:32:26 crc kubenswrapper[4794]: I0216 18:32:26.583619 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n7fsz" Feb 16 18:32:26 crc kubenswrapper[4794]: I0216 18:32:26.655169 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n7fsz" Feb 16 18:32:27 crc kubenswrapper[4794]: I0216 18:32:27.646576 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n7fsz" Feb 16 18:32:28 crc kubenswrapper[4794]: I0216 18:32:28.782537 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n7fsz"] Feb 16 18:32:29 crc kubenswrapper[4794]: I0216 18:32:29.349198 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-v7bg9_d26ffaf2-c9d0-459c-8ec3-6cc3a72b0cd4/prometheus-operator/0.log" Feb 16 18:32:29 crc kubenswrapper[4794]: I0216 18:32:29.370941 4794 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c6bdd8db9-6xpmf_0ca9bb6d-4f89-469a-aff2-3ecb9dcc814b/prometheus-operator-admission-webhook/0.log" Feb 16 18:32:29 crc kubenswrapper[4794]: I0216 18:32:29.403920 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c6bdd8db9-jv568_3a52f6b4-d7fb-4eab-92f6-8dee7c495f6a/prometheus-operator-admission-webhook/0.log" Feb 16 18:32:29 crc kubenswrapper[4794]: I0216 18:32:29.539947 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-xwlnp_c51ad8ee-4b16-4ddc-89a6-d63e4e5abf53/observability-ui-dashboards/0.log" Feb 16 18:32:29 crc kubenswrapper[4794]: I0216 18:32:29.557970 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-d85pd_bf8a1703-ef5d-4314-92ff-0a4f21d863ca/operator/0.log" Feb 16 18:32:29 crc kubenswrapper[4794]: I0216 18:32:29.609579 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-tq9qc_33908c91-9542-47cd-9530-dfe7b104e79e/perses-operator/0.log" Feb 16 18:32:30 crc kubenswrapper[4794]: I0216 18:32:30.623119 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n7fsz" podUID="7c5cf25a-1f1a-4ae5-9cef-4793b1d674df" containerName="registry-server" containerID="cri-o://ed05a4a12fed4a1f4d291a0a5cf643ccce48fbfc89bf48bf016f1fa1168d3d34" gracePeriod=2 Feb 16 18:32:30 crc kubenswrapper[4794]: E0216 18:32:30.792945 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:32:31 crc 
kubenswrapper[4794]: I0216 18:32:31.169777 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n7fsz" Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.251204 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c5cf25a-1f1a-4ae5-9cef-4793b1d674df-catalog-content\") pod \"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df\" (UID: \"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df\") " Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.251327 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmdbp\" (UniqueName: \"kubernetes.io/projected/7c5cf25a-1f1a-4ae5-9cef-4793b1d674df-kube-api-access-cmdbp\") pod \"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df\" (UID: \"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df\") " Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.251453 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c5cf25a-1f1a-4ae5-9cef-4793b1d674df-utilities\") pod \"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df\" (UID: \"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df\") " Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.252773 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c5cf25a-1f1a-4ae5-9cef-4793b1d674df-utilities" (OuterVolumeSpecName: "utilities") pod "7c5cf25a-1f1a-4ae5-9cef-4793b1d674df" (UID: "7c5cf25a-1f1a-4ae5-9cef-4793b1d674df"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.265505 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c5cf25a-1f1a-4ae5-9cef-4793b1d674df-kube-api-access-cmdbp" (OuterVolumeSpecName: "kube-api-access-cmdbp") pod "7c5cf25a-1f1a-4ae5-9cef-4793b1d674df" (UID: "7c5cf25a-1f1a-4ae5-9cef-4793b1d674df"). InnerVolumeSpecName "kube-api-access-cmdbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.297998 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c5cf25a-1f1a-4ae5-9cef-4793b1d674df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c5cf25a-1f1a-4ae5-9cef-4793b1d674df" (UID: "7c5cf25a-1f1a-4ae5-9cef-4793b1d674df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.353943 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c5cf25a-1f1a-4ae5-9cef-4793b1d674df-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.353970 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c5cf25a-1f1a-4ae5-9cef-4793b1d674df-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.353982 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmdbp\" (UniqueName: \"kubernetes.io/projected/7c5cf25a-1f1a-4ae5-9cef-4793b1d674df-kube-api-access-cmdbp\") on node \"crc\" DevicePath \"\"" Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.634453 4794 generic.go:334] "Generic (PLEG): container finished" podID="7c5cf25a-1f1a-4ae5-9cef-4793b1d674df" 
containerID="ed05a4a12fed4a1f4d291a0a5cf643ccce48fbfc89bf48bf016f1fa1168d3d34" exitCode=0 Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.634520 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n7fsz" Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.635818 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7fsz" event={"ID":"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df","Type":"ContainerDied","Data":"ed05a4a12fed4a1f4d291a0a5cf643ccce48fbfc89bf48bf016f1fa1168d3d34"} Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.635934 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7fsz" event={"ID":"7c5cf25a-1f1a-4ae5-9cef-4793b1d674df","Type":"ContainerDied","Data":"253e07a7349d5246ce9d80d2df4fb5f475b4154e2efbbd0c4ef856db6e1a4447"} Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.636009 4794 scope.go:117] "RemoveContainer" containerID="ed05a4a12fed4a1f4d291a0a5cf643ccce48fbfc89bf48bf016f1fa1168d3d34" Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.673579 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n7fsz"] Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.676207 4794 scope.go:117] "RemoveContainer" containerID="490dadb0a080dc725f3446707ffe6b8748d3078e9563b0848190c9e167f3f4ae" Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.682618 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n7fsz"] Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.737271 4794 scope.go:117] "RemoveContainer" containerID="2fb55b592b5e07ea836fe3a97b5ae350b002dd61145e6005d89e8fb36d882812" Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.769882 4794 scope.go:117] "RemoveContainer" containerID="ed05a4a12fed4a1f4d291a0a5cf643ccce48fbfc89bf48bf016f1fa1168d3d34" Feb 16 
18:32:31 crc kubenswrapper[4794]: E0216 18:32:31.770443 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed05a4a12fed4a1f4d291a0a5cf643ccce48fbfc89bf48bf016f1fa1168d3d34\": container with ID starting with ed05a4a12fed4a1f4d291a0a5cf643ccce48fbfc89bf48bf016f1fa1168d3d34 not found: ID does not exist" containerID="ed05a4a12fed4a1f4d291a0a5cf643ccce48fbfc89bf48bf016f1fa1168d3d34" Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.770498 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed05a4a12fed4a1f4d291a0a5cf643ccce48fbfc89bf48bf016f1fa1168d3d34"} err="failed to get container status \"ed05a4a12fed4a1f4d291a0a5cf643ccce48fbfc89bf48bf016f1fa1168d3d34\": rpc error: code = NotFound desc = could not find container \"ed05a4a12fed4a1f4d291a0a5cf643ccce48fbfc89bf48bf016f1fa1168d3d34\": container with ID starting with ed05a4a12fed4a1f4d291a0a5cf643ccce48fbfc89bf48bf016f1fa1168d3d34 not found: ID does not exist" Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.770532 4794 scope.go:117] "RemoveContainer" containerID="490dadb0a080dc725f3446707ffe6b8748d3078e9563b0848190c9e167f3f4ae" Feb 16 18:32:31 crc kubenswrapper[4794]: E0216 18:32:31.770851 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"490dadb0a080dc725f3446707ffe6b8748d3078e9563b0848190c9e167f3f4ae\": container with ID starting with 490dadb0a080dc725f3446707ffe6b8748d3078e9563b0848190c9e167f3f4ae not found: ID does not exist" containerID="490dadb0a080dc725f3446707ffe6b8748d3078e9563b0848190c9e167f3f4ae" Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.770894 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"490dadb0a080dc725f3446707ffe6b8748d3078e9563b0848190c9e167f3f4ae"} err="failed to get container status 
\"490dadb0a080dc725f3446707ffe6b8748d3078e9563b0848190c9e167f3f4ae\": rpc error: code = NotFound desc = could not find container \"490dadb0a080dc725f3446707ffe6b8748d3078e9563b0848190c9e167f3f4ae\": container with ID starting with 490dadb0a080dc725f3446707ffe6b8748d3078e9563b0848190c9e167f3f4ae not found: ID does not exist" Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.770923 4794 scope.go:117] "RemoveContainer" containerID="2fb55b592b5e07ea836fe3a97b5ae350b002dd61145e6005d89e8fb36d882812" Feb 16 18:32:31 crc kubenswrapper[4794]: E0216 18:32:31.771176 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fb55b592b5e07ea836fe3a97b5ae350b002dd61145e6005d89e8fb36d882812\": container with ID starting with 2fb55b592b5e07ea836fe3a97b5ae350b002dd61145e6005d89e8fb36d882812 not found: ID does not exist" containerID="2fb55b592b5e07ea836fe3a97b5ae350b002dd61145e6005d89e8fb36d882812" Feb 16 18:32:31 crc kubenswrapper[4794]: I0216 18:32:31.771216 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fb55b592b5e07ea836fe3a97b5ae350b002dd61145e6005d89e8fb36d882812"} err="failed to get container status \"2fb55b592b5e07ea836fe3a97b5ae350b002dd61145e6005d89e8fb36d882812\": rpc error: code = NotFound desc = could not find container \"2fb55b592b5e07ea836fe3a97b5ae350b002dd61145e6005d89e8fb36d882812\": container with ID starting with 2fb55b592b5e07ea836fe3a97b5ae350b002dd61145e6005d89e8fb36d882812 not found: ID does not exist" Feb 16 18:32:31 crc kubenswrapper[4794]: E0216 18:32:31.797267 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:32:32 crc 
kubenswrapper[4794]: I0216 18:32:32.805896 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c5cf25a-1f1a-4ae5-9cef-4793b1d674df" path="/var/lib/kubelet/pods/7c5cf25a-1f1a-4ae5-9cef-4793b1d674df/volumes" Feb 16 18:32:43 crc kubenswrapper[4794]: E0216 18:32:43.795552 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:32:46 crc kubenswrapper[4794]: E0216 18:32:46.803258 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:32:47 crc kubenswrapper[4794]: I0216 18:32:47.336455 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-8499595899-t6s7p_1a441979-8971-4f00-9a49-0dbd7d90d537/manager/0.log" Feb 16 18:32:47 crc kubenswrapper[4794]: I0216 18:32:47.395830 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-8499595899-t6s7p_1a441979-8971-4f00-9a49-0dbd7d90d537/kube-rbac-proxy/0.log" Feb 16 18:32:50 crc kubenswrapper[4794]: I0216 18:32:50.140940 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 18:32:50 crc kubenswrapper[4794]: I0216 18:32:50.141418 4794 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 18:32:57 crc kubenswrapper[4794]: E0216 18:32:57.794074 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:32:58 crc kubenswrapper[4794]: I0216 18:32:58.806646 4794 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 18:32:59 crc kubenswrapper[4794]: E0216 18:32:59.764377 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 18:32:59 crc kubenswrapper[4794]: E0216 18:32:59.764643 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 18:32:59 crc kubenswrapper[4794]: E0216 18:32:59.764765 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2h5l2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7gcsf_openstack(c695f880-15cb-45b1-8545-60d8437ec631): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 18:32:59 crc kubenswrapper[4794]: E0216 18:32:59.765922 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:33:08 crc kubenswrapper[4794]: E0216 18:33:08.541621 4794 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.151:60710->38.102.83.151:40789: read tcp 38.102.83.151:60710->38.102.83.151:40789: read: connection reset by peer Feb 16 18:33:12 crc kubenswrapper[4794]: E0216 18:33:12.793862 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:33:13 crc kubenswrapper[4794]: E0216 18:33:13.792474 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:33:20 crc kubenswrapper[4794]: I0216 18:33:20.140329 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 18:33:20 crc kubenswrapper[4794]: I0216 18:33:20.140841 4794 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 18:33:26 crc kubenswrapper[4794]: E0216 
18:33:26.913351 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 18:33:26 crc kubenswrapper[4794]: E0216 18:33:26.913970 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 18:33:26 crc kubenswrapper[4794]: E0216 18:33:26.914117 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh58dh6ch557h84h55ch564h5bh58fh5c8h5d4h584h669h667h569h59hd5hdbh9dh67ch5f9h59fh597h96h664h687h66dhfch5ddh5b7h88h59cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9v9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8981f528-1f74-4d56-a93c-22860725b490): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 18:33:26 crc kubenswrapper[4794]: E0216 18:33:26.915462 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:33:27 crc kubenswrapper[4794]: E0216 18:33:27.794326 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:33:37 crc kubenswrapper[4794]: E0216 18:33:37.793364 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:33:40 crc kubenswrapper[4794]: E0216 18:33:40.794442 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:33:48 crc kubenswrapper[4794]: I0216 18:33:48.398920 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7cft9"] Feb 16 18:33:48 crc kubenswrapper[4794]: E0216 18:33:48.400243 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a5e60a-6d42-4965-98d8-bfd752a92270" containerName="registry-server" Feb 16 18:33:48 crc kubenswrapper[4794]: I0216 18:33:48.400265 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a5e60a-6d42-4965-98d8-bfd752a92270" containerName="registry-server" Feb 16 18:33:48 crc kubenswrapper[4794]: E0216 18:33:48.400299 4794 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="78a5e60a-6d42-4965-98d8-bfd752a92270" containerName="extract-utilities" Feb 16 18:33:48 crc kubenswrapper[4794]: I0216 18:33:48.400335 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a5e60a-6d42-4965-98d8-bfd752a92270" containerName="extract-utilities" Feb 16 18:33:48 crc kubenswrapper[4794]: E0216 18:33:48.400374 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c5cf25a-1f1a-4ae5-9cef-4793b1d674df" containerName="extract-utilities" Feb 16 18:33:48 crc kubenswrapper[4794]: I0216 18:33:48.400386 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c5cf25a-1f1a-4ae5-9cef-4793b1d674df" containerName="extract-utilities" Feb 16 18:33:48 crc kubenswrapper[4794]: E0216 18:33:48.400426 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c5cf25a-1f1a-4ae5-9cef-4793b1d674df" containerName="extract-content" Feb 16 18:33:48 crc kubenswrapper[4794]: I0216 18:33:48.400438 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c5cf25a-1f1a-4ae5-9cef-4793b1d674df" containerName="extract-content" Feb 16 18:33:48 crc kubenswrapper[4794]: E0216 18:33:48.400464 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a5e60a-6d42-4965-98d8-bfd752a92270" containerName="extract-content" Feb 16 18:33:48 crc kubenswrapper[4794]: I0216 18:33:48.400476 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a5e60a-6d42-4965-98d8-bfd752a92270" containerName="extract-content" Feb 16 18:33:48 crc kubenswrapper[4794]: E0216 18:33:48.400525 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c5cf25a-1f1a-4ae5-9cef-4793b1d674df" containerName="registry-server" Feb 16 18:33:48 crc kubenswrapper[4794]: I0216 18:33:48.400536 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c5cf25a-1f1a-4ae5-9cef-4793b1d674df" containerName="registry-server" Feb 16 18:33:48 crc kubenswrapper[4794]: I0216 18:33:48.400952 4794 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="78a5e60a-6d42-4965-98d8-bfd752a92270" containerName="registry-server" Feb 16 18:33:48 crc kubenswrapper[4794]: I0216 18:33:48.400991 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c5cf25a-1f1a-4ae5-9cef-4793b1d674df" containerName="registry-server" Feb 16 18:33:48 crc kubenswrapper[4794]: I0216 18:33:48.403973 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7cft9" Feb 16 18:33:48 crc kubenswrapper[4794]: I0216 18:33:48.413147 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7cft9"] Feb 16 18:33:48 crc kubenswrapper[4794]: I0216 18:33:48.511354 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf6bc84-23c8-4431-91ce-325ad71302b8-catalog-content\") pod \"certified-operators-7cft9\" (UID: \"ecf6bc84-23c8-4431-91ce-325ad71302b8\") " pod="openshift-marketplace/certified-operators-7cft9" Feb 16 18:33:48 crc kubenswrapper[4794]: I0216 18:33:48.511523 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27ncn\" (UniqueName: \"kubernetes.io/projected/ecf6bc84-23c8-4431-91ce-325ad71302b8-kube-api-access-27ncn\") pod \"certified-operators-7cft9\" (UID: \"ecf6bc84-23c8-4431-91ce-325ad71302b8\") " pod="openshift-marketplace/certified-operators-7cft9" Feb 16 18:33:48 crc kubenswrapper[4794]: I0216 18:33:48.511684 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf6bc84-23c8-4431-91ce-325ad71302b8-utilities\") pod \"certified-operators-7cft9\" (UID: \"ecf6bc84-23c8-4431-91ce-325ad71302b8\") " pod="openshift-marketplace/certified-operators-7cft9" Feb 16 18:33:48 crc kubenswrapper[4794]: I0216 18:33:48.613865 4794 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-27ncn\" (UniqueName: \"kubernetes.io/projected/ecf6bc84-23c8-4431-91ce-325ad71302b8-kube-api-access-27ncn\") pod \"certified-operators-7cft9\" (UID: \"ecf6bc84-23c8-4431-91ce-325ad71302b8\") " pod="openshift-marketplace/certified-operators-7cft9" Feb 16 18:33:48 crc kubenswrapper[4794]: I0216 18:33:48.614010 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf6bc84-23c8-4431-91ce-325ad71302b8-utilities\") pod \"certified-operators-7cft9\" (UID: \"ecf6bc84-23c8-4431-91ce-325ad71302b8\") " pod="openshift-marketplace/certified-operators-7cft9" Feb 16 18:33:48 crc kubenswrapper[4794]: I0216 18:33:48.614068 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf6bc84-23c8-4431-91ce-325ad71302b8-catalog-content\") pod \"certified-operators-7cft9\" (UID: \"ecf6bc84-23c8-4431-91ce-325ad71302b8\") " pod="openshift-marketplace/certified-operators-7cft9" Feb 16 18:33:48 crc kubenswrapper[4794]: I0216 18:33:48.614558 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf6bc84-23c8-4431-91ce-325ad71302b8-catalog-content\") pod \"certified-operators-7cft9\" (UID: \"ecf6bc84-23c8-4431-91ce-325ad71302b8\") " pod="openshift-marketplace/certified-operators-7cft9" Feb 16 18:33:48 crc kubenswrapper[4794]: I0216 18:33:48.614654 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf6bc84-23c8-4431-91ce-325ad71302b8-utilities\") pod \"certified-operators-7cft9\" (UID: \"ecf6bc84-23c8-4431-91ce-325ad71302b8\") " pod="openshift-marketplace/certified-operators-7cft9" Feb 16 18:33:48 crc kubenswrapper[4794]: I0216 18:33:48.641438 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-27ncn\" (UniqueName: \"kubernetes.io/projected/ecf6bc84-23c8-4431-91ce-325ad71302b8-kube-api-access-27ncn\") pod \"certified-operators-7cft9\" (UID: \"ecf6bc84-23c8-4431-91ce-325ad71302b8\") " pod="openshift-marketplace/certified-operators-7cft9" Feb 16 18:33:48 crc kubenswrapper[4794]: I0216 18:33:48.733511 4794 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7cft9" Feb 16 18:33:49 crc kubenswrapper[4794]: I0216 18:33:49.208813 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7cft9"] Feb 16 18:33:49 crc kubenswrapper[4794]: I0216 18:33:49.606830 4794 generic.go:334] "Generic (PLEG): container finished" podID="ecf6bc84-23c8-4431-91ce-325ad71302b8" containerID="ae173b0d720b8583665d59ee41945309f5d9e53d4cc8b879b9da3d374d07d566" exitCode=0 Feb 16 18:33:49 crc kubenswrapper[4794]: I0216 18:33:49.606903 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7cft9" event={"ID":"ecf6bc84-23c8-4431-91ce-325ad71302b8","Type":"ContainerDied","Data":"ae173b0d720b8583665d59ee41945309f5d9e53d4cc8b879b9da3d374d07d566"} Feb 16 18:33:49 crc kubenswrapper[4794]: I0216 18:33:49.607385 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7cft9" event={"ID":"ecf6bc84-23c8-4431-91ce-325ad71302b8","Type":"ContainerStarted","Data":"445b7dc37a7fd7f707c53c584a1d748263069e1f23cc39f0687b1b5d14c7280f"} Feb 16 18:33:50 crc kubenswrapper[4794]: I0216 18:33:50.140533 4794 patch_prober.go:28] interesting pod/machine-config-daemon-8q7xf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 18:33:50 crc kubenswrapper[4794]: I0216 18:33:50.140613 4794 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 18:33:50 crc kubenswrapper[4794]: I0216 18:33:50.140674 4794 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" Feb 16 18:33:50 crc kubenswrapper[4794]: I0216 18:33:50.141805 4794 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032"} pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 18:33:50 crc kubenswrapper[4794]: I0216 18:33:50.141908 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" containerName="machine-config-daemon" containerID="cri-o://4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" gracePeriod=600 Feb 16 18:33:50 crc kubenswrapper[4794]: E0216 18:33:50.279870 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:33:50 crc kubenswrapper[4794]: I0216 18:33:50.621136 4794 generic.go:334] "Generic (PLEG): container finished" podID="2d17fb0b-381a-46a1-8bba-33daee594e18" 
containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" exitCode=0 Feb 16 18:33:50 crc kubenswrapper[4794]: I0216 18:33:50.621228 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerDied","Data":"4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032"} Feb 16 18:33:50 crc kubenswrapper[4794]: I0216 18:33:50.622480 4794 scope.go:117] "RemoveContainer" containerID="860677b1eb456be903bb112f204623b2baf083d1d49cf8922af98fec72c85451" Feb 16 18:33:50 crc kubenswrapper[4794]: I0216 18:33:50.623383 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 18:33:50 crc kubenswrapper[4794]: E0216 18:33:50.623769 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:33:50 crc kubenswrapper[4794]: I0216 18:33:50.629878 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7cft9" event={"ID":"ecf6bc84-23c8-4431-91ce-325ad71302b8","Type":"ContainerStarted","Data":"1e237d19bf12c5303e7ac49fd16f90a83a4313a537ee95fba7ecac37d9fba22b"} Feb 16 18:33:51 crc kubenswrapper[4794]: E0216 18:33:51.815056 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:33:52 crc kubenswrapper[4794]: I0216 18:33:52.660823 4794 generic.go:334] "Generic (PLEG): container finished" podID="ecf6bc84-23c8-4431-91ce-325ad71302b8" containerID="1e237d19bf12c5303e7ac49fd16f90a83a4313a537ee95fba7ecac37d9fba22b" exitCode=0 Feb 16 18:33:52 crc kubenswrapper[4794]: I0216 18:33:52.660914 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7cft9" event={"ID":"ecf6bc84-23c8-4431-91ce-325ad71302b8","Type":"ContainerDied","Data":"1e237d19bf12c5303e7ac49fd16f90a83a4313a537ee95fba7ecac37d9fba22b"} Feb 16 18:33:52 crc kubenswrapper[4794]: E0216 18:33:52.794194 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:33:53 crc kubenswrapper[4794]: I0216 18:33:53.681363 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7cft9" event={"ID":"ecf6bc84-23c8-4431-91ce-325ad71302b8","Type":"ContainerStarted","Data":"f20fd6e0a12366c24bb15e7ebaf0158d416689b03f4a18d7944af3c10a60a7f8"} Feb 16 18:33:53 crc kubenswrapper[4794]: I0216 18:33:53.712187 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7cft9" podStartSLOduration=2.208863301 podStartE2EDuration="5.712165346s" podCreationTimestamp="2026-02-16 18:33:48 +0000 UTC" firstStartedPulling="2026-02-16 18:33:49.608858917 +0000 UTC m=+5655.556953564" lastFinishedPulling="2026-02-16 18:33:53.112160942 +0000 UTC m=+5659.060255609" observedRunningTime="2026-02-16 18:33:53.704418277 +0000 UTC m=+5659.652512944" watchObservedRunningTime="2026-02-16 18:33:53.712165346 +0000 UTC m=+5659.660260003" Feb 16 18:33:58 crc 
kubenswrapper[4794]: I0216 18:33:58.733992 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7cft9" Feb 16 18:33:58 crc kubenswrapper[4794]: I0216 18:33:58.734976 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7cft9" Feb 16 18:33:58 crc kubenswrapper[4794]: I0216 18:33:58.830481 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7cft9" Feb 16 18:33:59 crc kubenswrapper[4794]: I0216 18:33:59.858108 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7cft9" Feb 16 18:33:59 crc kubenswrapper[4794]: I0216 18:33:59.930329 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7cft9"] Feb 16 18:34:01 crc kubenswrapper[4794]: I0216 18:34:01.784657 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7cft9" podUID="ecf6bc84-23c8-4431-91ce-325ad71302b8" containerName="registry-server" containerID="cri-o://f20fd6e0a12366c24bb15e7ebaf0158d416689b03f4a18d7944af3c10a60a7f8" gracePeriod=2 Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.368606 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7cft9" Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.434321 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf6bc84-23c8-4431-91ce-325ad71302b8-utilities\") pod \"ecf6bc84-23c8-4431-91ce-325ad71302b8\" (UID: \"ecf6bc84-23c8-4431-91ce-325ad71302b8\") " Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.434369 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf6bc84-23c8-4431-91ce-325ad71302b8-catalog-content\") pod \"ecf6bc84-23c8-4431-91ce-325ad71302b8\" (UID: \"ecf6bc84-23c8-4431-91ce-325ad71302b8\") " Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.434537 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27ncn\" (UniqueName: \"kubernetes.io/projected/ecf6bc84-23c8-4431-91ce-325ad71302b8-kube-api-access-27ncn\") pod \"ecf6bc84-23c8-4431-91ce-325ad71302b8\" (UID: \"ecf6bc84-23c8-4431-91ce-325ad71302b8\") " Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.435925 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf6bc84-23c8-4431-91ce-325ad71302b8-utilities" (OuterVolumeSpecName: "utilities") pod "ecf6bc84-23c8-4431-91ce-325ad71302b8" (UID: "ecf6bc84-23c8-4431-91ce-325ad71302b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.442571 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecf6bc84-23c8-4431-91ce-325ad71302b8-kube-api-access-27ncn" (OuterVolumeSpecName: "kube-api-access-27ncn") pod "ecf6bc84-23c8-4431-91ce-325ad71302b8" (UID: "ecf6bc84-23c8-4431-91ce-325ad71302b8"). InnerVolumeSpecName "kube-api-access-27ncn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.482738 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf6bc84-23c8-4431-91ce-325ad71302b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ecf6bc84-23c8-4431-91ce-325ad71302b8" (UID: "ecf6bc84-23c8-4431-91ce-325ad71302b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.537021 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf6bc84-23c8-4431-91ce-325ad71302b8-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.537049 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf6bc84-23c8-4431-91ce-325ad71302b8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.537065 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27ncn\" (UniqueName: \"kubernetes.io/projected/ecf6bc84-23c8-4431-91ce-325ad71302b8-kube-api-access-27ncn\") on node \"crc\" DevicePath \"\"" Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.812770 4794 generic.go:334] "Generic (PLEG): container finished" podID="ecf6bc84-23c8-4431-91ce-325ad71302b8" containerID="f20fd6e0a12366c24bb15e7ebaf0158d416689b03f4a18d7944af3c10a60a7f8" exitCode=0 Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.812914 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7cft9" Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.834921 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7cft9" event={"ID":"ecf6bc84-23c8-4431-91ce-325ad71302b8","Type":"ContainerDied","Data":"f20fd6e0a12366c24bb15e7ebaf0158d416689b03f4a18d7944af3c10a60a7f8"} Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.835160 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7cft9" event={"ID":"ecf6bc84-23c8-4431-91ce-325ad71302b8","Type":"ContainerDied","Data":"445b7dc37a7fd7f707c53c584a1d748263069e1f23cc39f0687b1b5d14c7280f"} Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.835219 4794 scope.go:117] "RemoveContainer" containerID="f20fd6e0a12366c24bb15e7ebaf0158d416689b03f4a18d7944af3c10a60a7f8" Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.863144 4794 scope.go:117] "RemoveContainer" containerID="1e237d19bf12c5303e7ac49fd16f90a83a4313a537ee95fba7ecac37d9fba22b" Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.891963 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7cft9"] Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.901792 4794 scope.go:117] "RemoveContainer" containerID="ae173b0d720b8583665d59ee41945309f5d9e53d4cc8b879b9da3d374d07d566" Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.905009 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7cft9"] Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.956551 4794 scope.go:117] "RemoveContainer" containerID="f20fd6e0a12366c24bb15e7ebaf0158d416689b03f4a18d7944af3c10a60a7f8" Feb 16 18:34:02 crc kubenswrapper[4794]: E0216 18:34:02.956991 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f20fd6e0a12366c24bb15e7ebaf0158d416689b03f4a18d7944af3c10a60a7f8\": container with ID starting with f20fd6e0a12366c24bb15e7ebaf0158d416689b03f4a18d7944af3c10a60a7f8 not found: ID does not exist" containerID="f20fd6e0a12366c24bb15e7ebaf0158d416689b03f4a18d7944af3c10a60a7f8" Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.957033 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f20fd6e0a12366c24bb15e7ebaf0158d416689b03f4a18d7944af3c10a60a7f8"} err="failed to get container status \"f20fd6e0a12366c24bb15e7ebaf0158d416689b03f4a18d7944af3c10a60a7f8\": rpc error: code = NotFound desc = could not find container \"f20fd6e0a12366c24bb15e7ebaf0158d416689b03f4a18d7944af3c10a60a7f8\": container with ID starting with f20fd6e0a12366c24bb15e7ebaf0158d416689b03f4a18d7944af3c10a60a7f8 not found: ID does not exist" Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.957058 4794 scope.go:117] "RemoveContainer" containerID="1e237d19bf12c5303e7ac49fd16f90a83a4313a537ee95fba7ecac37d9fba22b" Feb 16 18:34:02 crc kubenswrapper[4794]: E0216 18:34:02.957704 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e237d19bf12c5303e7ac49fd16f90a83a4313a537ee95fba7ecac37d9fba22b\": container with ID starting with 1e237d19bf12c5303e7ac49fd16f90a83a4313a537ee95fba7ecac37d9fba22b not found: ID does not exist" containerID="1e237d19bf12c5303e7ac49fd16f90a83a4313a537ee95fba7ecac37d9fba22b" Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.957726 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e237d19bf12c5303e7ac49fd16f90a83a4313a537ee95fba7ecac37d9fba22b"} err="failed to get container status \"1e237d19bf12c5303e7ac49fd16f90a83a4313a537ee95fba7ecac37d9fba22b\": rpc error: code = NotFound desc = could not find container \"1e237d19bf12c5303e7ac49fd16f90a83a4313a537ee95fba7ecac37d9fba22b\": container with ID 
starting with 1e237d19bf12c5303e7ac49fd16f90a83a4313a537ee95fba7ecac37d9fba22b not found: ID does not exist" Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.957738 4794 scope.go:117] "RemoveContainer" containerID="ae173b0d720b8583665d59ee41945309f5d9e53d4cc8b879b9da3d374d07d566" Feb 16 18:34:02 crc kubenswrapper[4794]: E0216 18:34:02.957983 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae173b0d720b8583665d59ee41945309f5d9e53d4cc8b879b9da3d374d07d566\": container with ID starting with ae173b0d720b8583665d59ee41945309f5d9e53d4cc8b879b9da3d374d07d566 not found: ID does not exist" containerID="ae173b0d720b8583665d59ee41945309f5d9e53d4cc8b879b9da3d374d07d566" Feb 16 18:34:02 crc kubenswrapper[4794]: I0216 18:34:02.958010 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae173b0d720b8583665d59ee41945309f5d9e53d4cc8b879b9da3d374d07d566"} err="failed to get container status \"ae173b0d720b8583665d59ee41945309f5d9e53d4cc8b879b9da3d374d07d566\": rpc error: code = NotFound desc = could not find container \"ae173b0d720b8583665d59ee41945309f5d9e53d4cc8b879b9da3d374d07d566\": container with ID starting with ae173b0d720b8583665d59ee41945309f5d9e53d4cc8b879b9da3d374d07d566 not found: ID does not exist" Feb 16 18:34:03 crc kubenswrapper[4794]: I0216 18:34:03.791401 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 18:34:03 crc kubenswrapper[4794]: E0216 18:34:03.792158 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" 
podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:34:04 crc kubenswrapper[4794]: I0216 18:34:04.816661 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecf6bc84-23c8-4431-91ce-325ad71302b8" path="/var/lib/kubelet/pods/ecf6bc84-23c8-4431-91ce-325ad71302b8/volumes" Feb 16 18:34:05 crc kubenswrapper[4794]: E0216 18:34:05.799358 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:34:06 crc kubenswrapper[4794]: E0216 18:34:06.799920 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:34:16 crc kubenswrapper[4794]: E0216 18:34:16.809238 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:34:17 crc kubenswrapper[4794]: I0216 18:34:17.794821 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 18:34:17 crc kubenswrapper[4794]: E0216 18:34:17.795111 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:34:17 crc kubenswrapper[4794]: E0216 18:34:17.795827 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:34:28 crc kubenswrapper[4794]: E0216 18:34:28.796496 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:34:30 crc kubenswrapper[4794]: I0216 18:34:30.794598 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 18:34:30 crc kubenswrapper[4794]: E0216 18:34:30.795894 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:34:30 crc kubenswrapper[4794]: E0216 18:34:30.797924 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:34:38 crc kubenswrapper[4794]: I0216 18:34:38.292992 4794 generic.go:334] "Generic (PLEG): container finished" podID="bbbb8431-488f-40c2-9166-28f5399b1253" containerID="1cfe5ea54e4c83749b053beec5e1f137149ed83e762008161b895423f0392205" exitCode=0 Feb 16 18:34:38 crc kubenswrapper[4794]: I0216 18:34:38.293064 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9bvp9/must-gather-d6pd7" event={"ID":"bbbb8431-488f-40c2-9166-28f5399b1253","Type":"ContainerDied","Data":"1cfe5ea54e4c83749b053beec5e1f137149ed83e762008161b895423f0392205"} Feb 16 18:34:38 crc kubenswrapper[4794]: I0216 18:34:38.294744 4794 scope.go:117] "RemoveContainer" containerID="1cfe5ea54e4c83749b053beec5e1f137149ed83e762008161b895423f0392205" Feb 16 18:34:38 crc kubenswrapper[4794]: I0216 18:34:38.395728 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9bvp9_must-gather-d6pd7_bbbb8431-488f-40c2-9166-28f5399b1253/gather/0.log" Feb 16 18:34:41 crc kubenswrapper[4794]: E0216 18:34:41.796566 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:34:43 crc kubenswrapper[4794]: E0216 18:34:43.795840 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:34:44 crc kubenswrapper[4794]: 
I0216 18:34:44.816245 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 18:34:44 crc kubenswrapper[4794]: E0216 18:34:44.817853 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:34:47 crc kubenswrapper[4794]: I0216 18:34:47.324358 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9bvp9/must-gather-d6pd7"] Feb 16 18:34:47 crc kubenswrapper[4794]: I0216 18:34:47.325032 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-9bvp9/must-gather-d6pd7" podUID="bbbb8431-488f-40c2-9166-28f5399b1253" containerName="copy" containerID="cri-o://87e8fd85c08d8e7dcdb892f28f2897267e7ab03e075c3e72eacd572d309d1f55" gracePeriod=2 Feb 16 18:34:47 crc kubenswrapper[4794]: I0216 18:34:47.336144 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9bvp9/must-gather-d6pd7"] Feb 16 18:34:47 crc kubenswrapper[4794]: I0216 18:34:47.819185 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9bvp9_must-gather-d6pd7_bbbb8431-488f-40c2-9166-28f5399b1253/copy/0.log" Feb 16 18:34:47 crc kubenswrapper[4794]: I0216 18:34:47.820321 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9bvp9/must-gather-d6pd7" Feb 16 18:34:47 crc kubenswrapper[4794]: I0216 18:34:47.860784 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bbbb8431-488f-40c2-9166-28f5399b1253-must-gather-output\") pod \"bbbb8431-488f-40c2-9166-28f5399b1253\" (UID: \"bbbb8431-488f-40c2-9166-28f5399b1253\") " Feb 16 18:34:47 crc kubenswrapper[4794]: I0216 18:34:47.860957 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcw2m\" (UniqueName: \"kubernetes.io/projected/bbbb8431-488f-40c2-9166-28f5399b1253-kube-api-access-hcw2m\") pod \"bbbb8431-488f-40c2-9166-28f5399b1253\" (UID: \"bbbb8431-488f-40c2-9166-28f5399b1253\") " Feb 16 18:34:48 crc kubenswrapper[4794]: I0216 18:34:48.051069 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbbb8431-488f-40c2-9166-28f5399b1253-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bbbb8431-488f-40c2-9166-28f5399b1253" (UID: "bbbb8431-488f-40c2-9166-28f5399b1253"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 18:34:48 crc kubenswrapper[4794]: I0216 18:34:48.064919 4794 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bbbb8431-488f-40c2-9166-28f5399b1253-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 16 18:34:48 crc kubenswrapper[4794]: I0216 18:34:48.418442 4794 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9bvp9_must-gather-d6pd7_bbbb8431-488f-40c2-9166-28f5399b1253/copy/0.log" Feb 16 18:34:48 crc kubenswrapper[4794]: I0216 18:34:48.420109 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9bvp9/must-gather-d6pd7" Feb 16 18:34:48 crc kubenswrapper[4794]: I0216 18:34:48.420078 4794 generic.go:334] "Generic (PLEG): container finished" podID="bbbb8431-488f-40c2-9166-28f5399b1253" containerID="87e8fd85c08d8e7dcdb892f28f2897267e7ab03e075c3e72eacd572d309d1f55" exitCode=143 Feb 16 18:34:48 crc kubenswrapper[4794]: I0216 18:34:48.420195 4794 scope.go:117] "RemoveContainer" containerID="87e8fd85c08d8e7dcdb892f28f2897267e7ab03e075c3e72eacd572d309d1f55" Feb 16 18:34:48 crc kubenswrapper[4794]: I0216 18:34:48.448658 4794 scope.go:117] "RemoveContainer" containerID="1cfe5ea54e4c83749b053beec5e1f137149ed83e762008161b895423f0392205" Feb 16 18:34:48 crc kubenswrapper[4794]: I0216 18:34:48.472691 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbbb8431-488f-40c2-9166-28f5399b1253-kube-api-access-hcw2m" (OuterVolumeSpecName: "kube-api-access-hcw2m") pod "bbbb8431-488f-40c2-9166-28f5399b1253" (UID: "bbbb8431-488f-40c2-9166-28f5399b1253"). InnerVolumeSpecName "kube-api-access-hcw2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 18:34:48 crc kubenswrapper[4794]: I0216 18:34:48.473625 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcw2m\" (UniqueName: \"kubernetes.io/projected/bbbb8431-488f-40c2-9166-28f5399b1253-kube-api-access-hcw2m\") pod \"bbbb8431-488f-40c2-9166-28f5399b1253\" (UID: \"bbbb8431-488f-40c2-9166-28f5399b1253\") " Feb 16 18:34:48 crc kubenswrapper[4794]: W0216 18:34:48.475842 4794 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bbbb8431-488f-40c2-9166-28f5399b1253/volumes/kubernetes.io~projected/kube-api-access-hcw2m Feb 16 18:34:48 crc kubenswrapper[4794]: I0216 18:34:48.475902 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbbb8431-488f-40c2-9166-28f5399b1253-kube-api-access-hcw2m" (OuterVolumeSpecName: "kube-api-access-hcw2m") pod "bbbb8431-488f-40c2-9166-28f5399b1253" (UID: "bbbb8431-488f-40c2-9166-28f5399b1253"). InnerVolumeSpecName "kube-api-access-hcw2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 18:34:48 crc kubenswrapper[4794]: I0216 18:34:48.501871 4794 scope.go:117] "RemoveContainer" containerID="87e8fd85c08d8e7dcdb892f28f2897267e7ab03e075c3e72eacd572d309d1f55" Feb 16 18:34:48 crc kubenswrapper[4794]: E0216 18:34:48.502296 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87e8fd85c08d8e7dcdb892f28f2897267e7ab03e075c3e72eacd572d309d1f55\": container with ID starting with 87e8fd85c08d8e7dcdb892f28f2897267e7ab03e075c3e72eacd572d309d1f55 not found: ID does not exist" containerID="87e8fd85c08d8e7dcdb892f28f2897267e7ab03e075c3e72eacd572d309d1f55" Feb 16 18:34:48 crc kubenswrapper[4794]: I0216 18:34:48.502456 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e8fd85c08d8e7dcdb892f28f2897267e7ab03e075c3e72eacd572d309d1f55"} err="failed to get container status \"87e8fd85c08d8e7dcdb892f28f2897267e7ab03e075c3e72eacd572d309d1f55\": rpc error: code = NotFound desc = could not find container \"87e8fd85c08d8e7dcdb892f28f2897267e7ab03e075c3e72eacd572d309d1f55\": container with ID starting with 87e8fd85c08d8e7dcdb892f28f2897267e7ab03e075c3e72eacd572d309d1f55 not found: ID does not exist" Feb 16 18:34:48 crc kubenswrapper[4794]: I0216 18:34:48.502500 4794 scope.go:117] "RemoveContainer" containerID="1cfe5ea54e4c83749b053beec5e1f137149ed83e762008161b895423f0392205" Feb 16 18:34:48 crc kubenswrapper[4794]: E0216 18:34:48.502932 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cfe5ea54e4c83749b053beec5e1f137149ed83e762008161b895423f0392205\": container with ID starting with 1cfe5ea54e4c83749b053beec5e1f137149ed83e762008161b895423f0392205 not found: ID does not exist" containerID="1cfe5ea54e4c83749b053beec5e1f137149ed83e762008161b895423f0392205" Feb 16 18:34:48 crc kubenswrapper[4794]: I0216 18:34:48.503018 
4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cfe5ea54e4c83749b053beec5e1f137149ed83e762008161b895423f0392205"} err="failed to get container status \"1cfe5ea54e4c83749b053beec5e1f137149ed83e762008161b895423f0392205\": rpc error: code = NotFound desc = could not find container \"1cfe5ea54e4c83749b053beec5e1f137149ed83e762008161b895423f0392205\": container with ID starting with 1cfe5ea54e4c83749b053beec5e1f137149ed83e762008161b895423f0392205 not found: ID does not exist" Feb 16 18:34:48 crc kubenswrapper[4794]: I0216 18:34:48.576910 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcw2m\" (UniqueName: \"kubernetes.io/projected/bbbb8431-488f-40c2-9166-28f5399b1253-kube-api-access-hcw2m\") on node \"crc\" DevicePath \"\"" Feb 16 18:34:48 crc kubenswrapper[4794]: I0216 18:34:48.824682 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbbb8431-488f-40c2-9166-28f5399b1253" path="/var/lib/kubelet/pods/bbbb8431-488f-40c2-9166-28f5399b1253/volumes" Feb 16 18:34:52 crc kubenswrapper[4794]: E0216 18:34:52.794260 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:34:56 crc kubenswrapper[4794]: I0216 18:34:56.792894 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 18:34:56 crc kubenswrapper[4794]: E0216 18:34:56.795132 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:34:57 crc kubenswrapper[4794]: E0216 18:34:57.810062 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:35:07 crc kubenswrapper[4794]: E0216 18:35:07.795183 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:35:10 crc kubenswrapper[4794]: I0216 18:35:10.791524 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 18:35:10 crc kubenswrapper[4794]: E0216 18:35:10.792461 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:35:12 crc kubenswrapper[4794]: E0216 18:35:12.794074 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:35:18 crc kubenswrapper[4794]: E0216 18:35:18.793973 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:35:22 crc kubenswrapper[4794]: I0216 18:35:22.795094 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 18:35:22 crc kubenswrapper[4794]: E0216 18:35:22.796186 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:35:26 crc kubenswrapper[4794]: E0216 18:35:26.800343 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:35:31 crc kubenswrapper[4794]: E0216 18:35:31.795423 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:35:37 crc kubenswrapper[4794]: I0216 18:35:37.792976 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 18:35:37 crc kubenswrapper[4794]: E0216 18:35:37.795059 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:35:37 crc kubenswrapper[4794]: E0216 18:35:37.796887 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:35:46 crc kubenswrapper[4794]: E0216 18:35:46.797111 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:35:48 crc kubenswrapper[4794]: I0216 18:35:48.792998 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 18:35:48 crc kubenswrapper[4794]: E0216 18:35:48.796744 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:35:52 crc kubenswrapper[4794]: E0216 18:35:52.795958 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:35:59 crc kubenswrapper[4794]: E0216 18:35:59.794666 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:36:02 crc kubenswrapper[4794]: I0216 18:36:02.792085 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 18:36:02 crc kubenswrapper[4794]: E0216 18:36:02.793003 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:36:04 crc kubenswrapper[4794]: E0216 18:36:04.812386 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:36:14 crc kubenswrapper[4794]: I0216 18:36:14.802253 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 18:36:14 crc kubenswrapper[4794]: E0216 18:36:14.804849 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:36:14 crc kubenswrapper[4794]: E0216 18:36:14.805224 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:36:18 crc kubenswrapper[4794]: E0216 18:36:18.793595 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:36:26 crc kubenswrapper[4794]: E0216 18:36:26.795928 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:36:28 crc kubenswrapper[4794]: I0216 18:36:28.792388 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 18:36:28 crc kubenswrapper[4794]: E0216 18:36:28.793102 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:36:32 crc kubenswrapper[4794]: E0216 18:36:32.793915 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:36:39 crc kubenswrapper[4794]: E0216 18:36:39.794412 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:36:41 crc kubenswrapper[4794]: I0216 18:36:41.791832 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 18:36:41 crc kubenswrapper[4794]: E0216 18:36:41.792517 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:36:46 crc kubenswrapper[4794]: E0216 18:36:46.794811 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:36:52 crc kubenswrapper[4794]: I0216 18:36:52.792684 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 18:36:52 crc kubenswrapper[4794]: E0216 18:36:52.793385 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:36:54 crc kubenswrapper[4794]: E0216 18:36:54.814813 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:36:59 crc kubenswrapper[4794]: E0216 18:36:59.793782 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:37:06 crc kubenswrapper[4794]: I0216 18:37:06.793401 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 18:37:06 crc kubenswrapper[4794]: E0216 18:37:06.795187 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:37:07 crc kubenswrapper[4794]: E0216 18:37:07.798921 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:37:13 crc kubenswrapper[4794]: E0216 18:37:13.794666 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:37:17 crc kubenswrapper[4794]: I0216 18:37:17.792716 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 18:37:17 crc kubenswrapper[4794]: E0216 18:37:17.793924 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:37:18 crc kubenswrapper[4794]: E0216 18:37:18.795874 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:37:24 crc kubenswrapper[4794]: E0216 18:37:24.802002 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:37:29 crc kubenswrapper[4794]: I0216 18:37:29.792289 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 18:37:29 crc kubenswrapper[4794]: E0216 18:37:29.793480 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:37:31 crc kubenswrapper[4794]: E0216 18:37:31.794100 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:37:39 crc kubenswrapper[4794]: E0216 18:37:39.799923 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:37:41 crc kubenswrapper[4794]: I0216 18:37:41.792627 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 18:37:41 crc kubenswrapper[4794]: E0216 18:37:41.793518 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:37:45 crc kubenswrapper[4794]: E0216 18:37:45.794651 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:37:52 crc kubenswrapper[4794]: E0216 18:37:52.795072 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:37:53 crc kubenswrapper[4794]: I0216 18:37:53.791555 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 18:37:53 crc kubenswrapper[4794]: E0216 18:37:53.791969 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:38:00 crc kubenswrapper[4794]: E0216 18:38:00.795509 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:38:04 crc kubenswrapper[4794]: I0216 18:38:04.813042 4794 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 18:38:04 crc kubenswrapper[4794]: E0216 18:38:04.957068 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 18:38:04 crc kubenswrapper[4794]: E0216 18:38:04.957144 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 18:38:04 crc kubenswrapper[4794]: E0216 18:38:04.957265 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2h5l
2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7gcsf_openstack(c695f880-15cb-45b1-8545-60d8437ec631): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 18:38:04 crc kubenswrapper[4794]: E0216 18:38:04.958599 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:38:07 crc kubenswrapper[4794]: I0216 18:38:07.792316 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 18:38:07 crc kubenswrapper[4794]: E0216 18:38:07.793678 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:38:12 crc kubenswrapper[4794]: I0216 18:38:12.481569 4794 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-74rq8"] Feb 16 18:38:12 crc kubenswrapper[4794]: E0216 18:38:12.483208 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf6bc84-23c8-4431-91ce-325ad71302b8" containerName="extract-content" Feb 16 18:38:12 crc kubenswrapper[4794]: I0216 18:38:12.483237 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf6bc84-23c8-4431-91ce-325ad71302b8" containerName="extract-content" Feb 16 18:38:12 crc kubenswrapper[4794]: E0216 18:38:12.483273 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbbb8431-488f-40c2-9166-28f5399b1253" containerName="gather" Feb 16 18:38:12 crc kubenswrapper[4794]: I0216 18:38:12.483289 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbbb8431-488f-40c2-9166-28f5399b1253" containerName="gather" Feb 16 18:38:12 crc kubenswrapper[4794]: E0216 18:38:12.483358 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbbb8431-488f-40c2-9166-28f5399b1253" containerName="copy" Feb 16 18:38:12 crc kubenswrapper[4794]: I0216 18:38:12.483376 4794 
state_mem.go:107] "Deleted CPUSet assignment" podUID="bbbb8431-488f-40c2-9166-28f5399b1253" containerName="copy" Feb 16 18:38:12 crc kubenswrapper[4794]: E0216 18:38:12.483453 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf6bc84-23c8-4431-91ce-325ad71302b8" containerName="registry-server" Feb 16 18:38:12 crc kubenswrapper[4794]: I0216 18:38:12.483470 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf6bc84-23c8-4431-91ce-325ad71302b8" containerName="registry-server" Feb 16 18:38:12 crc kubenswrapper[4794]: E0216 18:38:12.483495 4794 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf6bc84-23c8-4431-91ce-325ad71302b8" containerName="extract-utilities" Feb 16 18:38:12 crc kubenswrapper[4794]: I0216 18:38:12.483511 4794 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf6bc84-23c8-4431-91ce-325ad71302b8" containerName="extract-utilities" Feb 16 18:38:12 crc kubenswrapper[4794]: I0216 18:38:12.484059 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecf6bc84-23c8-4431-91ce-325ad71302b8" containerName="registry-server" Feb 16 18:38:12 crc kubenswrapper[4794]: I0216 18:38:12.484117 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbbb8431-488f-40c2-9166-28f5399b1253" containerName="gather" Feb 16 18:38:12 crc kubenswrapper[4794]: I0216 18:38:12.484142 4794 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbbb8431-488f-40c2-9166-28f5399b1253" containerName="copy" Feb 16 18:38:12 crc kubenswrapper[4794]: I0216 18:38:12.489191 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74rq8" Feb 16 18:38:12 crc kubenswrapper[4794]: I0216 18:38:12.509555 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-74rq8"] Feb 16 18:38:12 crc kubenswrapper[4794]: I0216 18:38:12.614360 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4d44d7-f4a0-4e0d-84cf-4d991431c741-utilities\") pod \"redhat-marketplace-74rq8\" (UID: \"ca4d44d7-f4a0-4e0d-84cf-4d991431c741\") " pod="openshift-marketplace/redhat-marketplace-74rq8" Feb 16 18:38:12 crc kubenswrapper[4794]: I0216 18:38:12.614709 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4d44d7-f4a0-4e0d-84cf-4d991431c741-catalog-content\") pod \"redhat-marketplace-74rq8\" (UID: \"ca4d44d7-f4a0-4e0d-84cf-4d991431c741\") " pod="openshift-marketplace/redhat-marketplace-74rq8" Feb 16 18:38:12 crc kubenswrapper[4794]: I0216 18:38:12.614777 4794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rr9p\" (UniqueName: \"kubernetes.io/projected/ca4d44d7-f4a0-4e0d-84cf-4d991431c741-kube-api-access-4rr9p\") pod \"redhat-marketplace-74rq8\" (UID: \"ca4d44d7-f4a0-4e0d-84cf-4d991431c741\") " pod="openshift-marketplace/redhat-marketplace-74rq8" Feb 16 18:38:12 crc kubenswrapper[4794]: I0216 18:38:12.717521 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4d44d7-f4a0-4e0d-84cf-4d991431c741-utilities\") pod \"redhat-marketplace-74rq8\" (UID: \"ca4d44d7-f4a0-4e0d-84cf-4d991431c741\") " pod="openshift-marketplace/redhat-marketplace-74rq8" Feb 16 18:38:12 crc kubenswrapper[4794]: I0216 18:38:12.717668 4794 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4d44d7-f4a0-4e0d-84cf-4d991431c741-catalog-content\") pod \"redhat-marketplace-74rq8\" (UID: \"ca4d44d7-f4a0-4e0d-84cf-4d991431c741\") " pod="openshift-marketplace/redhat-marketplace-74rq8" Feb 16 18:38:12 crc kubenswrapper[4794]: I0216 18:38:12.717693 4794 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rr9p\" (UniqueName: \"kubernetes.io/projected/ca4d44d7-f4a0-4e0d-84cf-4d991431c741-kube-api-access-4rr9p\") pod \"redhat-marketplace-74rq8\" (UID: \"ca4d44d7-f4a0-4e0d-84cf-4d991431c741\") " pod="openshift-marketplace/redhat-marketplace-74rq8" Feb 16 18:38:12 crc kubenswrapper[4794]: I0216 18:38:12.718225 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4d44d7-f4a0-4e0d-84cf-4d991431c741-utilities\") pod \"redhat-marketplace-74rq8\" (UID: \"ca4d44d7-f4a0-4e0d-84cf-4d991431c741\") " pod="openshift-marketplace/redhat-marketplace-74rq8" Feb 16 18:38:12 crc kubenswrapper[4794]: I0216 18:38:12.718287 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4d44d7-f4a0-4e0d-84cf-4d991431c741-catalog-content\") pod \"redhat-marketplace-74rq8\" (UID: \"ca4d44d7-f4a0-4e0d-84cf-4d991431c741\") " pod="openshift-marketplace/redhat-marketplace-74rq8" Feb 16 18:38:12 crc kubenswrapper[4794]: I0216 18:38:12.737493 4794 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rr9p\" (UniqueName: \"kubernetes.io/projected/ca4d44d7-f4a0-4e0d-84cf-4d991431c741-kube-api-access-4rr9p\") pod \"redhat-marketplace-74rq8\" (UID: \"ca4d44d7-f4a0-4e0d-84cf-4d991431c741\") " pod="openshift-marketplace/redhat-marketplace-74rq8" Feb 16 18:38:12 crc kubenswrapper[4794]: I0216 18:38:12.845660 4794 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74rq8" Feb 16 18:38:13 crc kubenswrapper[4794]: I0216 18:38:13.361301 4794 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-74rq8"] Feb 16 18:38:13 crc kubenswrapper[4794]: E0216 18:38:13.794622 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490" Feb 16 18:38:14 crc kubenswrapper[4794]: I0216 18:38:14.130231 4794 generic.go:334] "Generic (PLEG): container finished" podID="ca4d44d7-f4a0-4e0d-84cf-4d991431c741" containerID="27390b55e2c029ea4f249760a9a4ece44eb4527a6aef51ca752283c325e96eda" exitCode=0 Feb 16 18:38:14 crc kubenswrapper[4794]: I0216 18:38:14.130287 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74rq8" event={"ID":"ca4d44d7-f4a0-4e0d-84cf-4d991431c741","Type":"ContainerDied","Data":"27390b55e2c029ea4f249760a9a4ece44eb4527a6aef51ca752283c325e96eda"} Feb 16 18:38:14 crc kubenswrapper[4794]: I0216 18:38:14.130329 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74rq8" event={"ID":"ca4d44d7-f4a0-4e0d-84cf-4d991431c741","Type":"ContainerStarted","Data":"3318a16541982cae2dcfac7defc5556abaa1757b118e55c7177059722d3b58ab"} Feb 16 18:38:15 crc kubenswrapper[4794]: I0216 18:38:15.149237 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74rq8" event={"ID":"ca4d44d7-f4a0-4e0d-84cf-4d991431c741","Type":"ContainerStarted","Data":"eaf295515aab7c25749e069da999c31791b2df466d9350937071148ba972d20d"} Feb 16 18:38:15 crc kubenswrapper[4794]: E0216 18:38:15.793300 4794 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631" Feb 16 18:38:16 crc kubenswrapper[4794]: I0216 18:38:16.163601 4794 generic.go:334] "Generic (PLEG): container finished" podID="ca4d44d7-f4a0-4e0d-84cf-4d991431c741" containerID="eaf295515aab7c25749e069da999c31791b2df466d9350937071148ba972d20d" exitCode=0 Feb 16 18:38:16 crc kubenswrapper[4794]: I0216 18:38:16.164009 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74rq8" event={"ID":"ca4d44d7-f4a0-4e0d-84cf-4d991431c741","Type":"ContainerDied","Data":"eaf295515aab7c25749e069da999c31791b2df466d9350937071148ba972d20d"} Feb 16 18:38:17 crc kubenswrapper[4794]: I0216 18:38:17.187741 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74rq8" event={"ID":"ca4d44d7-f4a0-4e0d-84cf-4d991431c741","Type":"ContainerStarted","Data":"1e1ebeb64edd6499922061778fffbbbe569b0a906130bb97f12fbfcc3cf7b019"} Feb 16 18:38:17 crc kubenswrapper[4794]: I0216 18:38:17.218093 4794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-74rq8" podStartSLOduration=2.591209486 podStartE2EDuration="5.218066742s" podCreationTimestamp="2026-02-16 18:38:12 +0000 UTC" firstStartedPulling="2026-02-16 18:38:14.133395543 +0000 UTC m=+5920.081490190" lastFinishedPulling="2026-02-16 18:38:16.760252779 +0000 UTC m=+5922.708347446" observedRunningTime="2026-02-16 18:38:17.217966029 +0000 UTC m=+5923.166060696" watchObservedRunningTime="2026-02-16 18:38:17.218066742 +0000 UTC m=+5923.166161429" Feb 16 18:38:21 crc kubenswrapper[4794]: I0216 18:38:21.792575 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032" Feb 16 
18:38:21 crc kubenswrapper[4794]: E0216 18:38:21.793846 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18" Feb 16 18:38:22 crc kubenswrapper[4794]: I0216 18:38:22.846625 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-74rq8" Feb 16 18:38:22 crc kubenswrapper[4794]: I0216 18:38:22.847084 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-74rq8" Feb 16 18:38:22 crc kubenswrapper[4794]: I0216 18:38:22.941842 4794 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-74rq8" Feb 16 18:38:23 crc kubenswrapper[4794]: I0216 18:38:23.377019 4794 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-74rq8" Feb 16 18:38:23 crc kubenswrapper[4794]: I0216 18:38:23.469238 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-74rq8"] Feb 16 18:38:25 crc kubenswrapper[4794]: I0216 18:38:25.313906 4794 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-74rq8" podUID="ca4d44d7-f4a0-4e0d-84cf-4d991431c741" containerName="registry-server" containerID="cri-o://1e1ebeb64edd6499922061778fffbbbe569b0a906130bb97f12fbfcc3cf7b019" gracePeriod=2 Feb 16 18:38:25 crc kubenswrapper[4794]: I0216 18:38:25.880829 4794 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74rq8" Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.020945 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4d44d7-f4a0-4e0d-84cf-4d991431c741-catalog-content\") pod \"ca4d44d7-f4a0-4e0d-84cf-4d991431c741\" (UID: \"ca4d44d7-f4a0-4e0d-84cf-4d991431c741\") " Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.021431 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4d44d7-f4a0-4e0d-84cf-4d991431c741-utilities\") pod \"ca4d44d7-f4a0-4e0d-84cf-4d991431c741\" (UID: \"ca4d44d7-f4a0-4e0d-84cf-4d991431c741\") " Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.021611 4794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rr9p\" (UniqueName: \"kubernetes.io/projected/ca4d44d7-f4a0-4e0d-84cf-4d991431c741-kube-api-access-4rr9p\") pod \"ca4d44d7-f4a0-4e0d-84cf-4d991431c741\" (UID: \"ca4d44d7-f4a0-4e0d-84cf-4d991431c741\") " Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.022822 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca4d44d7-f4a0-4e0d-84cf-4d991431c741-utilities" (OuterVolumeSpecName: "utilities") pod "ca4d44d7-f4a0-4e0d-84cf-4d991431c741" (UID: "ca4d44d7-f4a0-4e0d-84cf-4d991431c741"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.023122 4794 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4d44d7-f4a0-4e0d-84cf-4d991431c741-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.028576 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca4d44d7-f4a0-4e0d-84cf-4d991431c741-kube-api-access-4rr9p" (OuterVolumeSpecName: "kube-api-access-4rr9p") pod "ca4d44d7-f4a0-4e0d-84cf-4d991431c741" (UID: "ca4d44d7-f4a0-4e0d-84cf-4d991431c741"). InnerVolumeSpecName "kube-api-access-4rr9p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.055155 4794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca4d44d7-f4a0-4e0d-84cf-4d991431c741-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca4d44d7-f4a0-4e0d-84cf-4d991431c741" (UID: "ca4d44d7-f4a0-4e0d-84cf-4d991431c741"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.125759 4794 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4d44d7-f4a0-4e0d-84cf-4d991431c741-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.125807 4794 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rr9p\" (UniqueName: \"kubernetes.io/projected/ca4d44d7-f4a0-4e0d-84cf-4d991431c741-kube-api-access-4rr9p\") on node \"crc\" DevicePath \"\""
Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.331665 4794 generic.go:334] "Generic (PLEG): container finished" podID="ca4d44d7-f4a0-4e0d-84cf-4d991431c741" containerID="1e1ebeb64edd6499922061778fffbbbe569b0a906130bb97f12fbfcc3cf7b019" exitCode=0
Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.331729 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74rq8" event={"ID":"ca4d44d7-f4a0-4e0d-84cf-4d991431c741","Type":"ContainerDied","Data":"1e1ebeb64edd6499922061778fffbbbe569b0a906130bb97f12fbfcc3cf7b019"}
Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.331770 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74rq8" event={"ID":"ca4d44d7-f4a0-4e0d-84cf-4d991431c741","Type":"ContainerDied","Data":"3318a16541982cae2dcfac7defc5556abaa1757b118e55c7177059722d3b58ab"}
Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.331800 4794 scope.go:117] "RemoveContainer" containerID="1e1ebeb64edd6499922061778fffbbbe569b0a906130bb97f12fbfcc3cf7b019"
Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.332110 4794 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74rq8"
Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.365574 4794 scope.go:117] "RemoveContainer" containerID="eaf295515aab7c25749e069da999c31791b2df466d9350937071148ba972d20d"
Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.399487 4794 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-74rq8"]
Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.420030 4794 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-74rq8"]
Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.445509 4794 scope.go:117] "RemoveContainer" containerID="27390b55e2c029ea4f249760a9a4ece44eb4527a6aef51ca752283c325e96eda"
Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.501160 4794 scope.go:117] "RemoveContainer" containerID="1e1ebeb64edd6499922061778fffbbbe569b0a906130bb97f12fbfcc3cf7b019"
Feb 16 18:38:26 crc kubenswrapper[4794]: E0216 18:38:26.501701 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e1ebeb64edd6499922061778fffbbbe569b0a906130bb97f12fbfcc3cf7b019\": container with ID starting with 1e1ebeb64edd6499922061778fffbbbe569b0a906130bb97f12fbfcc3cf7b019 not found: ID does not exist" containerID="1e1ebeb64edd6499922061778fffbbbe569b0a906130bb97f12fbfcc3cf7b019"
Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.501741 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e1ebeb64edd6499922061778fffbbbe569b0a906130bb97f12fbfcc3cf7b019"} err="failed to get container status \"1e1ebeb64edd6499922061778fffbbbe569b0a906130bb97f12fbfcc3cf7b019\": rpc error: code = NotFound desc = could not find container \"1e1ebeb64edd6499922061778fffbbbe569b0a906130bb97f12fbfcc3cf7b019\": container with ID starting with 1e1ebeb64edd6499922061778fffbbbe569b0a906130bb97f12fbfcc3cf7b019 not found: ID does not exist"
Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.501770 4794 scope.go:117] "RemoveContainer" containerID="eaf295515aab7c25749e069da999c31791b2df466d9350937071148ba972d20d"
Feb 16 18:38:26 crc kubenswrapper[4794]: E0216 18:38:26.502009 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaf295515aab7c25749e069da999c31791b2df466d9350937071148ba972d20d\": container with ID starting with eaf295515aab7c25749e069da999c31791b2df466d9350937071148ba972d20d not found: ID does not exist" containerID="eaf295515aab7c25749e069da999c31791b2df466d9350937071148ba972d20d"
Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.502029 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf295515aab7c25749e069da999c31791b2df466d9350937071148ba972d20d"} err="failed to get container status \"eaf295515aab7c25749e069da999c31791b2df466d9350937071148ba972d20d\": rpc error: code = NotFound desc = could not find container \"eaf295515aab7c25749e069da999c31791b2df466d9350937071148ba972d20d\": container with ID starting with eaf295515aab7c25749e069da999c31791b2df466d9350937071148ba972d20d not found: ID does not exist"
Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.502042 4794 scope.go:117] "RemoveContainer" containerID="27390b55e2c029ea4f249760a9a4ece44eb4527a6aef51ca752283c325e96eda"
Feb 16 18:38:26 crc kubenswrapper[4794]: E0216 18:38:26.502279 4794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27390b55e2c029ea4f249760a9a4ece44eb4527a6aef51ca752283c325e96eda\": container with ID starting with 27390b55e2c029ea4f249760a9a4ece44eb4527a6aef51ca752283c325e96eda not found: ID does not exist" containerID="27390b55e2c029ea4f249760a9a4ece44eb4527a6aef51ca752283c325e96eda"
Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.502294 4794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27390b55e2c029ea4f249760a9a4ece44eb4527a6aef51ca752283c325e96eda"} err="failed to get container status \"27390b55e2c029ea4f249760a9a4ece44eb4527a6aef51ca752283c325e96eda\": rpc error: code = NotFound desc = could not find container \"27390b55e2c029ea4f249760a9a4ece44eb4527a6aef51ca752283c325e96eda\": container with ID starting with 27390b55e2c029ea4f249760a9a4ece44eb4527a6aef51ca752283c325e96eda not found: ID does not exist"
Feb 16 18:38:26 crc kubenswrapper[4794]: E0216 18:38:26.796026 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:38:26 crc kubenswrapper[4794]: I0216 18:38:26.809704 4794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca4d44d7-f4a0-4e0d-84cf-4d991431c741" path="/var/lib/kubelet/pods/ca4d44d7-f4a0-4e0d-84cf-4d991431c741/volumes"
Feb 16 18:38:29 crc kubenswrapper[4794]: E0216 18:38:29.299334 4794 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 18:38:29 crc kubenswrapper[4794]: E0216 18:38:29.299681 4794 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 18:38:29 crc kubenswrapper[4794]: E0216 18:38:29.299879 4794 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh58dh6ch557h84h55ch564h5bh58fh5c8h5d4h584h669h667h569h59hd5hdbh9dh67ch5f9h59fh597h96h664h687h66dhfch5ddh5b7h88h59cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9v9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8981f528-1f74-4d56-a93c-22860725b490): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 18:38:29 crc kubenswrapper[4794]: E0216 18:38:29.301131 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:38:32 crc kubenswrapper[4794]: I0216 18:38:32.792021 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032"
Feb 16 18:38:32 crc kubenswrapper[4794]: E0216 18:38:32.793161 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 18:38:37 crc kubenswrapper[4794]: E0216 18:38:37.794855 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:38:43 crc kubenswrapper[4794]: E0216 18:38:43.794592 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:38:43 crc kubenswrapper[4794]: I0216 18:38:43.946112 4794 scope.go:117] "RemoveContainer" containerID="7a88d568a80cf612a2e3c3890b318aeefa5de72e757fa6139e36a70c83474302"
Feb 16 18:38:43 crc kubenswrapper[4794]: I0216 18:38:43.982281 4794 scope.go:117] "RemoveContainer" containerID="b3131ee3ce87012811be51d82f8880efb800bef2073a042840252475d48f62c2"
Feb 16 18:38:44 crc kubenswrapper[4794]: I0216 18:38:44.037180 4794 scope.go:117] "RemoveContainer" containerID="c8fb71761427052cc9ee6ca9ee6455933bed6c6ed0180fd8763f5c5035b6dc3e"
Feb 16 18:38:47 crc kubenswrapper[4794]: I0216 18:38:47.792688 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032"
Feb 16 18:38:47 crc kubenswrapper[4794]: E0216 18:38:47.794179 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8q7xf_openshift-machine-config-operator(2d17fb0b-381a-46a1-8bba-33daee594e18)\"" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" podUID="2d17fb0b-381a-46a1-8bba-33daee594e18"
Feb 16 18:38:50 crc kubenswrapper[4794]: E0216 18:38:50.797122 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:38:58 crc kubenswrapper[4794]: E0216 18:38:58.795934 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:38:59 crc kubenswrapper[4794]: I0216 18:38:59.791459 4794 scope.go:117] "RemoveContainer" containerID="4e15aeb50f8bb6767198248710b849e8c1c7d3ea7cc1b2f2b495b90676480032"
Feb 16 18:39:00 crc kubenswrapper[4794]: I0216 18:39:00.842886 4794 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8q7xf" event={"ID":"2d17fb0b-381a-46a1-8bba-33daee594e18","Type":"ContainerStarted","Data":"629a909e1b109f8042a528695f43977be144704014af632414d0049538d5fe39"}
Feb 16 18:39:04 crc kubenswrapper[4794]: E0216 18:39:04.802231 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:39:09 crc kubenswrapper[4794]: E0216 18:39:09.796321 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:39:19 crc kubenswrapper[4794]: E0216 18:39:19.795797 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:39:22 crc kubenswrapper[4794]: E0216 18:39:22.795388 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:39:33 crc kubenswrapper[4794]: E0216 18:39:33.794419 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:39:34 crc kubenswrapper[4794]: E0216 18:39:34.808162 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:39:44 crc kubenswrapper[4794]: E0216 18:39:44.794380 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:39:45 crc kubenswrapper[4794]: E0216 18:39:45.794736 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:39:56 crc kubenswrapper[4794]: E0216 18:39:56.794989 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:39:59 crc kubenswrapper[4794]: E0216 18:39:59.794969 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:40:07 crc kubenswrapper[4794]: E0216 18:40:07.795757 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:40:14 crc kubenswrapper[4794]: E0216 18:40:14.809835 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:40:22 crc kubenswrapper[4794]: E0216 18:40:22.794621 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:40:26 crc kubenswrapper[4794]: E0216 18:40:26.794089 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"
Feb 16 18:40:33 crc kubenswrapper[4794]: E0216 18:40:33.796258 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8981f528-1f74-4d56-a93c-22860725b490"
Feb 16 18:40:37 crc kubenswrapper[4794]: E0216 18:40:37.795129 4794 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7gcsf" podUID="c695f880-15cb-45b1-8545-60d8437ec631"